00:00:00.004 Started by upstream project "autotest-per-patch" build number 132335 00:00:00.004 originally caused by: 00:00:00.004 Started by user sys_sgci 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:09.778 The recommended git tool is: git 00:00:09.778 using credential 00000000-0000-0000-0000-000000000002 00:00:09.780 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:09.795 Fetching changes from the remote Git repository 00:00:09.796 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:09.810 Using shallow fetch with depth 1 00:00:09.810 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:09.810 > git --version # timeout=10 00:00:09.821 > git --version # 'git version 2.39.2' 00:00:09.821 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:09.832 Setting http proxy: proxy-dmz.intel.com:911 00:00:09.832 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:15.999 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:16.010 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:16.022 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:16.022 > git config core.sparsecheckout # timeout=10 00:00:16.035 > git read-tree -mu HEAD # timeout=10 00:00:16.050 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:16.075 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:16.075 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:16.158 [Pipeline] Start of Pipeline 00:00:16.171 [Pipeline] library 00:00:16.173 Loading library shm_lib@master 00:00:16.173 Library shm_lib@master is cached. Copying from home. 00:00:16.190 [Pipeline] node 00:00:16.198 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:16.200 [Pipeline] { 00:00:16.210 [Pipeline] catchError 00:00:16.211 [Pipeline] { 00:00:16.221 [Pipeline] wrap 00:00:16.227 [Pipeline] { 00:00:16.231 [Pipeline] stage 00:00:16.233 [Pipeline] { (Prologue) 00:00:16.422 [Pipeline] sh 00:00:16.705 + logger -p user.info -t JENKINS-CI 00:00:16.725 [Pipeline] echo 00:00:16.727 Node: WFP6 00:00:16.735 [Pipeline] sh 00:00:17.037 [Pipeline] setCustomBuildProperty 00:00:17.052 [Pipeline] echo 00:00:17.054 Cleanup processes 00:00:17.060 [Pipeline] sh 00:00:17.349 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.349 271080 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.362 [Pipeline] sh 00:00:17.647 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.647 ++ grep -v 'sudo pgrep' 00:00:17.647 ++ awk '{print $1}' 00:00:17.647 + sudo kill -9 00:00:17.647 + true 00:00:17.658 [Pipeline] cleanWs 00:00:17.666 [WS-CLEANUP] Deleting project workspace... 00:00:17.666 [WS-CLEANUP] Deferred wipeout is used... 
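The "Cleanup processes" step traced above runs pgrep over the workspace, filters out its own invocation, and kills whatever is left over from earlier runs. A minimal standalone sketch of the same sequence, using this job's workspace path; the '|| true' mirrors the '+ true' guard that keeps an empty kill from failing the build:

  #!/usr/bin/env bash
  # Kill any stale processes still referencing the workspace SPDK tree.
  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # pgrep -af matches full command lines (and matches itself, hence the
  # grep -v); awk keeps only the PID column.
  pids=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')
  # With no survivors the PID list is empty and kill exits non-zero,
  # so guard it exactly as the trace does.
  sudo kill -9 $pids || true
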
00:00:17.672 [WS-CLEANUP] done 00:00:17.676 [Pipeline] setCustomBuildProperty 00:00:17.700 [Pipeline] sh 00:00:17.980 + sudo git config --global --replace-all safe.directory '*' 00:00:18.074 [Pipeline] httpRequest 00:00:19.438 [Pipeline] echo 00:00:19.440 Sorcerer 10.211.164.20 is alive 00:00:19.449 [Pipeline] retry 00:00:19.450 [Pipeline] { 00:00:19.459 [Pipeline] httpRequest 00:00:19.463 HttpMethod: GET 00:00:19.464 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:19.464 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:19.472 Response Code: HTTP/1.1 200 OK 00:00:19.472 Success: Status code 200 is in the accepted range: 200,404 00:00:19.472 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:36.651 [Pipeline] } 00:00:36.664 [Pipeline] // retry 00:00:36.670 [Pipeline] sh 00:00:36.953 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:36.971 [Pipeline] httpRequest 00:00:37.511 [Pipeline] echo 00:00:37.512 Sorcerer 10.211.164.20 is alive 00:00:37.518 [Pipeline] retry 00:00:37.520 [Pipeline] { 00:00:37.529 [Pipeline] httpRequest 00:00:37.532 HttpMethod: GET 00:00:37.533 URL: http://10.211.164.20/packages/spdk_95f6a056ecf4b8cae15fa0d46b90a394eb041775.tar.gz 00:00:37.533 Sending request to url: http://10.211.164.20/packages/spdk_95f6a056ecf4b8cae15fa0d46b90a394eb041775.tar.gz 00:00:37.538 Response Code: HTTP/1.1 200 OK 00:00:37.538 Success: Status code 200 is in the accepted range: 200,404 00:00:37.539 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_95f6a056ecf4b8cae15fa0d46b90a394eb041775.tar.gz 00:06:09.949 [Pipeline] } 00:06:09.966 [Pipeline] // retry 00:06:09.974 [Pipeline] sh 00:06:10.263 + tar --no-same-owner -xf spdk_95f6a056ecf4b8cae15fa0d46b90a394eb041775.tar.gz 00:06:12.815 [Pipeline] sh 00:06:13.101 + git -C spdk log --oneline -n5 00:06:13.101 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:06:13.101 a38267915 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:06:13.101 095307e93 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:06:13.101 3b3a1a596 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:06:13.101 17c638de0 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:06:13.112 [Pipeline] } 00:06:13.124 [Pipeline] // stage 00:06:13.132 [Pipeline] stage 00:06:13.134 [Pipeline] { (Prepare) 00:06:13.148 [Pipeline] writeFile 00:06:13.161 [Pipeline] sh 00:06:13.442 + logger -p user.info -t JENKINS-CI 00:06:13.455 [Pipeline] sh 00:06:13.740 + logger -p user.info -t JENKINS-CI 00:06:13.752 [Pipeline] sh 00:06:14.038 + cat autorun-spdk.conf 00:06:14.038 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:14.038 SPDK_TEST_NVMF=1 00:06:14.038 SPDK_TEST_NVME_CLI=1 00:06:14.038 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:14.038 SPDK_TEST_NVMF_NICS=e810 00:06:14.038 SPDK_TEST_VFIOUSER=1 00:06:14.038 SPDK_RUN_UBSAN=1 00:06:14.038 NET_TYPE=phy 00:06:14.046 RUN_NIGHTLY=0 00:06:14.050 [Pipeline] readFile 00:06:14.074 [Pipeline] withEnv 00:06:14.076 [Pipeline] { 00:06:14.088 [Pipeline] sh 00:06:14.377 + set -ex 00:06:14.377 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:06:14.377 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:14.377 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:14.377 ++ SPDK_TEST_NVMF=1 00:06:14.377 ++ 
SPDK_TEST_NVME_CLI=1 00:06:14.377 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:14.377 ++ SPDK_TEST_NVMF_NICS=e810 00:06:14.377 ++ SPDK_TEST_VFIOUSER=1 00:06:14.377 ++ SPDK_RUN_UBSAN=1 00:06:14.377 ++ NET_TYPE=phy 00:06:14.377 ++ RUN_NIGHTLY=0 00:06:14.377 + case $SPDK_TEST_NVMF_NICS in 00:06:14.377 + DRIVERS=ice 00:06:14.377 + [[ tcp == \r\d\m\a ]] 00:06:14.377 + [[ -n ice ]] 00:06:14.377 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:06:14.377 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:06:14.377 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:06:14.377 rmmod: ERROR: Module irdma is not currently loaded 00:06:14.377 rmmod: ERROR: Module i40iw is not currently loaded 00:06:14.377 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:06:14.377 + true 00:06:14.377 + for D in $DRIVERS 00:06:14.377 + sudo modprobe ice 00:06:14.377 + exit 0 00:06:14.387 [Pipeline] } 00:06:14.403 [Pipeline] // withEnv 00:06:14.409 [Pipeline] } 00:06:14.423 [Pipeline] // stage 00:06:14.434 [Pipeline] catchError 00:06:14.436 [Pipeline] { 00:06:14.450 [Pipeline] timeout 00:06:14.450 Timeout set to expire in 1 hr 0 min 00:06:14.452 [Pipeline] { 00:06:14.468 [Pipeline] stage 00:06:14.470 [Pipeline] { (Tests) 00:06:14.484 [Pipeline] sh 00:06:14.816 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:14.816 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:14.816 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:14.816 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:06:14.816 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:14.816 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:14.816 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:06:14.816 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:14.816 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:14.816 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:14.816 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:06:14.816 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:14.816 + source /etc/os-release 00:06:14.816 ++ NAME='Fedora Linux' 00:06:14.816 ++ VERSION='39 (Cloud Edition)' 00:06:14.816 ++ ID=fedora 00:06:14.816 ++ VERSION_ID=39 00:06:14.816 ++ VERSION_CODENAME= 00:06:14.816 ++ PLATFORM_ID=platform:f39 00:06:14.816 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:14.816 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:14.816 ++ LOGO=fedora-logo-icon 00:06:14.816 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:14.816 ++ HOME_URL=https://fedoraproject.org/ 00:06:14.816 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:14.816 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:14.816 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:14.816 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:14.816 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:14.816 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:14.816 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:14.816 ++ SUPPORT_END=2024-11-12 00:06:14.816 ++ VARIANT='Cloud Edition' 00:06:14.816 ++ VARIANT_ID=cloud 00:06:14.816 + uname -a 00:06:14.816 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:14.816 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:17.374 Hugepages 00:06:17.374 node hugesize free / total 00:06:17.374 node0 1048576kB 0 / 0 00:06:17.374 node0 2048kB 0 / 0 00:06:17.374 node1 1048576kB 0 / 0 00:06:17.374 node1 2048kB 0 / 0 00:06:17.374 00:06:17.374 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:17.374 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:17.374 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:17.374 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:17.374 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:17.374 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:17.374 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:17.374 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:17.374 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:17.374 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:17.374 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:17.374 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:17.374 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:17.374 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:17.374 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:17.374 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:17.374 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:17.374 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:17.374 + rm -f /tmp/spdk-ld-path 00:06:17.374 + source autorun-spdk.conf 00:06:17.374 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:17.374 ++ SPDK_TEST_NVMF=1 00:06:17.374 ++ SPDK_TEST_NVME_CLI=1 00:06:17.374 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:17.374 ++ SPDK_TEST_NVMF_NICS=e810 00:06:17.374 ++ SPDK_TEST_VFIOUSER=1 00:06:17.374 ++ SPDK_RUN_UBSAN=1 00:06:17.374 ++ NET_TYPE=phy 00:06:17.374 ++ RUN_NIGHTLY=0 00:06:17.374 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:17.374 + [[ -n '' ]] 00:06:17.374 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.374 + for M in /var/spdk/build-*-manifest.txt 00:06:17.374 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:06:17.374 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:17.375 + for M in /var/spdk/build-*-manifest.txt 00:06:17.375 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:17.375 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:17.375 + for M in /var/spdk/build-*-manifest.txt 00:06:17.375 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:17.375 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:17.375 ++ uname 00:06:17.375 + [[ Linux == \L\i\n\u\x ]] 00:06:17.375 + sudo dmesg -T 00:06:17.375 + sudo dmesg --clear 00:06:17.635 + dmesg_pid=273063 00:06:17.635 + [[ Fedora Linux == FreeBSD ]] 00:06:17.635 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:17.635 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:17.635 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:17.635 + [[ -x /usr/src/fio-static/fio ]] 00:06:17.635 + export FIO_BIN=/usr/src/fio-static/fio 00:06:17.635 + FIO_BIN=/usr/src/fio-static/fio 00:06:17.635 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:17.635 + sudo dmesg -Tw 00:06:17.635 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:17.635 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:17.635 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:17.635 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:17.635 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:17.635 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:17.635 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:17.635 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:17.635 06:16:49 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:06:17.635 06:16:49 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:06:17.635 06:16:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:06:17.635 06:16:49 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:17.635 06:16:49 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:17.635 06:16:49 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:06:17.635 06:16:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.635 06:16:49 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:17.635 06:16:49 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:17.635 06:16:49 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.635 06:16:49 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.635 06:16:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.635 06:16:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.635 06:16:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.635 06:16:49 -- paths/export.sh@5 -- $ export PATH 00:06:17.635 06:16:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.635 06:16:49 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:17.635 06:16:49 -- common/autobuild_common.sh@486 -- $ date +%s 00:06:17.635 06:16:49 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732079809.XXXXXX 00:06:17.635 06:16:49 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732079809.4kUbfz 00:06:17.635 06:16:49 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:06:17.635 06:16:49 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:06:17.635 06:16:49 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:06:17.635 06:16:49 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:17.635 06:16:49 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:17.635 06:16:49 -- common/autobuild_common.sh@502 -- $ get_config_params 00:06:17.635 06:16:49 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:06:17.635 06:16:49 -- common/autotest_common.sh@10 -- $ set +x 
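The PATH blocks above come from /etc/opt/spdk-pkgdep/paths/export.sh being sourced; per the xtrace (script lines @2-@6) it simply prepends the golangci, Go and protoc toolchain directories and re-exports PATH. A sketch of what the script appears to contain, reconstructed from that trace; the repeated segments visible in the echoed PATH are what results when it is sourced more than once with no de-duplication:

  # Reconstructed from the xtrace above; versions are this node's installs.
  PATH=/opt/golangci/1.54.2/bin:$PATH   # trace line @2
  PATH=/opt/go/1.21.1/bin:$PATH         # trace line @3
  PATH=/opt/protoc/21.7/bin:$PATH       # trace line @4
  export PATH                           # trace line @5
  echo "$PATH"                          # trace line @6
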
00:06:17.635 06:16:49 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:06:17.635 06:16:49 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:06:17.635 06:16:49 -- pm/common@17 -- $ local monitor 00:06:17.635 06:16:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:17.635 06:16:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:17.635 06:16:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:17.635 06:16:49 -- pm/common@21 -- $ date +%s 00:06:17.635 06:16:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:17.635 06:16:49 -- pm/common@21 -- $ date +%s 00:06:17.635 06:16:49 -- pm/common@25 -- $ sleep 1 00:06:17.635 06:16:49 -- pm/common@21 -- $ date +%s 00:06:17.635 06:16:49 -- pm/common@21 -- $ date +%s 00:06:17.635 06:16:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079809 00:06:17.635 06:16:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079809 00:06:17.635 06:16:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079809 00:06:17.635 06:16:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079809 00:06:17.635 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079809_collect-cpu-load.pm.log 00:06:17.635 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079809_collect-vmstat.pm.log 00:06:17.635 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079809_collect-cpu-temp.pm.log 00:06:17.635 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079809_collect-bmc-pm.bmc.pm.log 00:06:18.575 06:16:50 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:06:18.575 06:16:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:18.575 06:16:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:18.575 06:16:50 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.575 06:16:50 -- spdk/autobuild.sh@16 -- $ date -u 00:06:18.575 Wed Nov 20 05:16:50 AM UTC 2024 00:06:18.575 06:16:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:18.836 v25.01-pre-189-g95f6a056e 00:06:18.836 06:16:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:18.836 06:16:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:18.836 06:16:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:18.836 06:16:50 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:06:18.836 06:16:50 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:06:18.836 06:16:50 -- common/autotest_common.sh@10 -- $ set +x 00:06:18.836 
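The four "Redirecting to ..." lines above are the resource monitors starting up: pm/common launches one collector each for CPU load, vmstat, CPU temperature and BMC power (the last under sudo -E), pointing them at the job's output/power directory with a name built from the script name and a date +%s stamp. A sketch of the launch pattern, flags copied verbatim from the traced commands; whether each collector is backgrounded by the caller or daemonizes itself is not visible in the log, so the trailing '&' is an assumption:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  power_dir=$rootdir/../output/power
  ts=$(date +%s)    # 1732079809 in this run
  for c in collect-cpu-load collect-vmstat collect-cpu-temp; do
      # logs land at monitor.autobuild.sh.<ts>_<name>.pm.log
      # (the "Redirecting to" lines above)
      "$rootdir/scripts/perf/pm/$c" -d "$power_dir" -l -p "monitor.autobuild.sh.$ts" &
  done
  # BMC power readings need root; -E preserves the environment
  sudo -E "$rootdir/scripts/perf/pm/collect-bmc-pm" -d "$power_dir" -l -p "monitor.autobuild.sh.$ts" &
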
************************************ 00:06:18.836 START TEST ubsan 00:06:18.836 ************************************ 00:06:18.836 06:16:50 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:06:18.836 using ubsan 00:06:18.836 00:06:18.836 real 0m0.000s 00:06:18.836 user 0m0.000s 00:06:18.836 sys 0m0.000s 00:06:18.836 06:16:50 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:18.836 06:16:50 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:18.836 ************************************ 00:06:18.836 END TEST ubsan 00:06:18.836 ************************************ 00:06:18.836 06:16:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:18.836 06:16:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:18.836 06:16:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:18.836 06:16:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:18.836 06:16:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:18.836 06:16:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:18.836 06:16:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:18.836 06:16:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:18.836 06:16:50 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:06:18.836 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:18.836 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:19.405 Using 'verbs' RDMA provider 00:06:32.198 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:44.421 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:44.421 Creating mk/config.mk...done. 00:06:44.421 Creating mk/cc.flags.mk...done. 00:06:44.421 Type 'make' to build. 00:06:44.421 06:17:15 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:06:44.421 06:17:15 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:06:44.421 06:17:15 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:06:44.421 06:17:15 -- common/autotest_common.sh@10 -- $ set +x 00:06:44.421 ************************************ 00:06:44.421 START TEST make 00:06:44.421 ************************************ 00:06:44.421 06:17:15 make -- common/autotest_common.sh@1127 -- $ make -j96 00:06:44.680 make[1]: Nothing to be done for 'all'. 
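The build step above reduces to a plain configure-and-make; the flags below are taken verbatim from the autobuild.sh@67 configure line, so the same build can be reproduced outside Jenkins (paths are this job's, adjust locally):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user \
      --with-shared
  make -j96    # the job wraps this in run_test "make"
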
00:06:46.063 The Meson build system
00:06:46.063 Version: 1.5.0
00:06:46.063 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:06:46.063 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:46.063 Build type: native build
00:06:46.063 Project name: libvfio-user
00:06:46.063 Project version: 0.0.1
00:06:46.063 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:46.063 C linker for the host machine: cc ld.bfd 2.40-14
00:06:46.063 Host machine cpu family: x86_64
00:06:46.063 Host machine cpu: x86_64
00:06:46.063 Run-time dependency threads found: YES
00:06:46.063 Library dl found: YES
00:06:46.063 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:46.063 Run-time dependency json-c found: YES 0.17
00:06:46.063 Run-time dependency cmocka found: YES 1.1.7
00:06:46.063 Program pytest-3 found: NO
00:06:46.063 Program flake8 found: NO
00:06:46.063 Program misspell-fixer found: NO
00:06:46.063 Program restructuredtext-lint found: NO
00:06:46.063 Program valgrind found: YES (/usr/bin/valgrind)
00:06:46.063 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:46.063 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:46.063 Compiler for C supports arguments -Wwrite-strings: YES
00:06:46.063 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:46.063 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:06:46.063 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:06:46.063 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:46.063 Build targets in project: 8 00:06:46.063 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:06:46.063 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:06:46.063 00:06:46.063 libvfio-user 0.0.1 00:06:46.063 00:06:46.063 User defined options 00:06:46.063 buildtype : debug 00:06:46.063 default_library: shared 00:06:46.063 libdir : /usr/local/lib 00:06:46.063 00:06:46.063 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:46.631 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:46.631 [1/37] Compiling C object samples/null.p/null.c.o 00:06:46.631 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:06:46.631 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:06:46.631 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:06:46.631 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:06:46.631 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:06:46.631 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:06:46.631 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:06:46.631 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:06:46.631 [10/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:06:46.631 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:06:46.631 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:06:46.631 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:06:46.631 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:06:46.631 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:06:46.631 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:06:46.631 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:06:46.631 [18/37] Compiling C object samples/server.p/server.c.o 00:06:46.631 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:06:46.631 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:06:46.631 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:06:46.631 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:06:46.632 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:06:46.632 [24/37] Compiling C object samples/client.p/client.c.o 00:06:46.632 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:06:46.632 [26/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:06:46.632 [27/37] Linking target samples/client 00:06:46.632 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:06:46.632 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:06:46.632 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:06:46.891 [31/37] Linking target test/unit_tests 00:06:46.891 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:06:46.891 [33/37] Linking target samples/server 00:06:46.891 [34/37] Linking target samples/null 00:06:46.891 [35/37] Linking target samples/lspci 00:06:46.891 [36/37] Linking target samples/gpio-pci-idio-16 00:06:46.891 [37/37] Linking target samples/shadow_ioeventfd_server 00:06:46.891 INFO: autodetecting backend as ninja 00:06:46.891 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
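Taken together with the DESTDIR line that follows, the libvfio-user stage above is a meson setup/compile plus a staged install into the SPDK tree. A sketch under the options shown in the "User defined options" summary (buildtype debug, default_library shared, libdir /usr/local/lib); the exact invocation SPDK's build system uses is not shown in the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  meson setup --buildtype debug --default-library shared --libdir /usr/local/lib \
      "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user"
  ninja -C "$SPDK/build/libvfio-user/build-debug"
  # staged install, matching the DESTDIR command on the next log line
  DESTDIR="$SPDK/build/libvfio-user" \
      meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"
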
00:06:46.891 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:47.459 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:47.459 ninja: no work to do. 00:06:52.737 The Meson build system 00:06:52.737 Version: 1.5.0 00:06:52.737 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:06:52.737 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:06:52.737 Build type: native build 00:06:52.737 Program cat found: YES (/usr/bin/cat) 00:06:52.737 Project name: DPDK 00:06:52.737 Project version: 24.03.0 00:06:52.737 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:52.737 C linker for the host machine: cc ld.bfd 2.40-14 00:06:52.737 Host machine cpu family: x86_64 00:06:52.737 Host machine cpu: x86_64 00:06:52.737 Message: ## Building in Developer Mode ## 00:06:52.737 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:52.737 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:06:52.737 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:52.737 Program python3 found: YES (/usr/bin/python3) 00:06:52.737 Program cat found: YES (/usr/bin/cat) 00:06:52.737 Compiler for C supports arguments -march=native: YES 00:06:52.737 Checking for size of "void *" : 8 00:06:52.737 Checking for size of "void *" : 8 (cached) 00:06:52.737 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:52.737 Library m found: YES 00:06:52.737 Library numa found: YES 00:06:52.737 Has header "numaif.h" : YES 00:06:52.737 Library fdt found: NO 00:06:52.737 Library execinfo found: NO 00:06:52.737 Has header "execinfo.h" : YES 00:06:52.737 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:52.737 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:52.737 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:52.737 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:52.737 Run-time dependency openssl found: YES 3.1.1 00:06:52.737 Run-time dependency libpcap found: YES 1.10.4 00:06:52.737 Has header "pcap.h" with dependency libpcap: YES 00:06:52.737 Compiler for C supports arguments -Wcast-qual: YES 00:06:52.737 Compiler for C supports arguments -Wdeprecated: YES 00:06:52.737 Compiler for C supports arguments -Wformat: YES 00:06:52.737 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:52.737 Compiler for C supports arguments -Wformat-security: NO 00:06:52.737 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:52.737 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:52.737 Compiler for C supports arguments -Wnested-externs: YES 00:06:52.737 Compiler for C supports arguments -Wold-style-definition: YES 00:06:52.737 Compiler for C supports arguments -Wpointer-arith: YES 00:06:52.737 Compiler for C supports arguments -Wsign-compare: YES 00:06:52.737 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:52.737 Compiler for C supports arguments -Wundef: YES 00:06:52.737 Compiler for C supports arguments -Wwrite-strings: YES 00:06:52.737 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:52.737 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:06:52.737 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:52.737 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:52.737 Program objdump found: YES (/usr/bin/objdump) 00:06:52.737 Compiler for C supports arguments -mavx512f: YES 00:06:52.737 Checking if "AVX512 checking" compiles: YES 00:06:52.737 Fetching value of define "__SSE4_2__" : 1 00:06:52.737 Fetching value of define "__AES__" : 1 00:06:52.737 Fetching value of define "__AVX__" : 1 00:06:52.737 Fetching value of define "__AVX2__" : 1 00:06:52.737 Fetching value of define "__AVX512BW__" : 1 00:06:52.737 Fetching value of define "__AVX512CD__" : 1 00:06:52.737 Fetching value of define "__AVX512DQ__" : 1 00:06:52.737 Fetching value of define "__AVX512F__" : 1 00:06:52.737 Fetching value of define "__AVX512VL__" : 1 00:06:52.737 Fetching value of define "__PCLMUL__" : 1 00:06:52.737 Fetching value of define "__RDRND__" : 1 00:06:52.737 Fetching value of define "__RDSEED__" : 1 00:06:52.737 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:52.737 Fetching value of define "__znver1__" : (undefined) 00:06:52.737 Fetching value of define "__znver2__" : (undefined) 00:06:52.737 Fetching value of define "__znver3__" : (undefined) 00:06:52.737 Fetching value of define "__znver4__" : (undefined) 00:06:52.737 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:52.737 Message: lib/log: Defining dependency "log" 00:06:52.737 Message: lib/kvargs: Defining dependency "kvargs" 00:06:52.737 Message: lib/telemetry: Defining dependency "telemetry" 00:06:52.737 Checking for function "getentropy" : NO 00:06:52.737 Message: lib/eal: Defining dependency "eal" 00:06:52.737 Message: lib/ring: Defining dependency "ring" 00:06:52.737 Message: lib/rcu: Defining dependency "rcu" 00:06:52.737 Message: lib/mempool: Defining dependency "mempool" 00:06:52.737 Message: lib/mbuf: Defining dependency "mbuf" 00:06:52.737 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:52.737 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:52.737 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:52.737 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:52.737 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:52.737 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:06:52.737 Compiler for C supports arguments -mpclmul: YES 00:06:52.737 Compiler for C supports arguments -maes: YES 00:06:52.737 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:52.737 Compiler for C supports arguments -mavx512bw: YES 00:06:52.737 Compiler for C supports arguments -mavx512dq: YES 00:06:52.737 Compiler for C supports arguments -mavx512vl: YES 00:06:52.737 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:52.737 Compiler for C supports arguments -mavx2: YES 00:06:52.737 Compiler for C supports arguments -mavx: YES 00:06:52.737 Message: lib/net: Defining dependency "net" 00:06:52.737 Message: lib/meter: Defining dependency "meter" 00:06:52.737 Message: lib/ethdev: Defining dependency "ethdev" 00:06:52.737 Message: lib/pci: Defining dependency "pci" 00:06:52.737 Message: lib/cmdline: Defining dependency "cmdline" 00:06:52.737 Message: lib/hash: Defining dependency "hash" 00:06:52.737 Message: lib/timer: Defining dependency "timer" 00:06:52.737 Message: lib/compressdev: Defining dependency "compressdev" 00:06:52.737 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:52.737 Message: lib/dmadev: Defining dependency 
"dmadev" 00:06:52.737 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:52.737 Message: lib/power: Defining dependency "power" 00:06:52.737 Message: lib/reorder: Defining dependency "reorder" 00:06:52.737 Message: lib/security: Defining dependency "security" 00:06:52.737 Has header "linux/userfaultfd.h" : YES 00:06:52.737 Has header "linux/vduse.h" : YES 00:06:52.737 Message: lib/vhost: Defining dependency "vhost" 00:06:52.737 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:52.737 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:52.737 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:52.737 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:52.737 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:52.737 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:52.737 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:52.737 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:52.737 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:52.737 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:52.737 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:52.737 Configuring doxy-api-html.conf using configuration 00:06:52.737 Configuring doxy-api-man.conf using configuration 00:06:52.737 Program mandb found: YES (/usr/bin/mandb) 00:06:52.737 Program sphinx-build found: NO 00:06:52.737 Configuring rte_build_config.h using configuration 00:06:52.737 Message: 00:06:52.737 ================= 00:06:52.737 Applications Enabled 00:06:52.737 ================= 00:06:52.737 00:06:52.737 apps: 00:06:52.737 00:06:52.737 00:06:52.737 Message: 00:06:52.737 ================= 00:06:52.737 Libraries Enabled 00:06:52.737 ================= 00:06:52.737 00:06:52.737 libs: 00:06:52.737 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:52.737 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:52.737 cryptodev, dmadev, power, reorder, security, vhost, 00:06:52.737 00:06:52.737 Message: 00:06:52.737 =============== 00:06:52.737 Drivers Enabled 00:06:52.737 =============== 00:06:52.737 00:06:52.737 common: 00:06:52.737 00:06:52.737 bus: 00:06:52.737 pci, vdev, 00:06:52.737 mempool: 00:06:52.737 ring, 00:06:52.737 dma: 00:06:52.737 00:06:52.737 net: 00:06:52.737 00:06:52.737 crypto: 00:06:52.737 00:06:52.737 compress: 00:06:52.737 00:06:52.737 vdpa: 00:06:52.737 00:06:52.737 00:06:52.737 Message: 00:06:52.737 ================= 00:06:52.737 Content Skipped 00:06:52.737 ================= 00:06:52.737 00:06:52.737 apps: 00:06:52.737 dumpcap: explicitly disabled via build config 00:06:52.737 graph: explicitly disabled via build config 00:06:52.737 pdump: explicitly disabled via build config 00:06:52.737 proc-info: explicitly disabled via build config 00:06:52.738 test-acl: explicitly disabled via build config 00:06:52.738 test-bbdev: explicitly disabled via build config 00:06:52.738 test-cmdline: explicitly disabled via build config 00:06:52.738 test-compress-perf: explicitly disabled via build config 00:06:52.738 test-crypto-perf: explicitly disabled via build config 00:06:52.738 test-dma-perf: explicitly disabled via build config 00:06:52.738 test-eventdev: explicitly disabled via build config 00:06:52.738 test-fib: explicitly disabled via build config 00:06:52.738 test-flow-perf: explicitly disabled via build config 00:06:52.738 test-gpudev: explicitly 
disabled via build config 00:06:52.738 test-mldev: explicitly disabled via build config 00:06:52.738 test-pipeline: explicitly disabled via build config 00:06:52.738 test-pmd: explicitly disabled via build config 00:06:52.738 test-regex: explicitly disabled via build config 00:06:52.738 test-sad: explicitly disabled via build config 00:06:52.738 test-security-perf: explicitly disabled via build config 00:06:52.738 00:06:52.738 libs: 00:06:52.738 argparse: explicitly disabled via build config 00:06:52.738 metrics: explicitly disabled via build config 00:06:52.738 acl: explicitly disabled via build config 00:06:52.738 bbdev: explicitly disabled via build config 00:06:52.738 bitratestats: explicitly disabled via build config 00:06:52.738 bpf: explicitly disabled via build config 00:06:52.738 cfgfile: explicitly disabled via build config 00:06:52.738 distributor: explicitly disabled via build config 00:06:52.738 efd: explicitly disabled via build config 00:06:52.738 eventdev: explicitly disabled via build config 00:06:52.738 dispatcher: explicitly disabled via build config 00:06:52.738 gpudev: explicitly disabled via build config 00:06:52.738 gro: explicitly disabled via build config 00:06:52.738 gso: explicitly disabled via build config 00:06:52.738 ip_frag: explicitly disabled via build config 00:06:52.738 jobstats: explicitly disabled via build config 00:06:52.738 latencystats: explicitly disabled via build config 00:06:52.738 lpm: explicitly disabled via build config 00:06:52.738 member: explicitly disabled via build config 00:06:52.738 pcapng: explicitly disabled via build config 00:06:52.738 rawdev: explicitly disabled via build config 00:06:52.738 regexdev: explicitly disabled via build config 00:06:52.738 mldev: explicitly disabled via build config 00:06:52.738 rib: explicitly disabled via build config 00:06:52.738 sched: explicitly disabled via build config 00:06:52.738 stack: explicitly disabled via build config 00:06:52.738 ipsec: explicitly disabled via build config 00:06:52.738 pdcp: explicitly disabled via build config 00:06:52.738 fib: explicitly disabled via build config 00:06:52.738 port: explicitly disabled via build config 00:06:52.738 pdump: explicitly disabled via build config 00:06:52.738 table: explicitly disabled via build config 00:06:52.738 pipeline: explicitly disabled via build config 00:06:52.738 graph: explicitly disabled via build config 00:06:52.738 node: explicitly disabled via build config 00:06:52.738 00:06:52.738 drivers: 00:06:52.738 common/cpt: not in enabled drivers build config 00:06:52.738 common/dpaax: not in enabled drivers build config 00:06:52.738 common/iavf: not in enabled drivers build config 00:06:52.738 common/idpf: not in enabled drivers build config 00:06:52.738 common/ionic: not in enabled drivers build config 00:06:52.738 common/mvep: not in enabled drivers build config 00:06:52.738 common/octeontx: not in enabled drivers build config 00:06:52.738 bus/auxiliary: not in enabled drivers build config 00:06:52.738 bus/cdx: not in enabled drivers build config 00:06:52.738 bus/dpaa: not in enabled drivers build config 00:06:52.738 bus/fslmc: not in enabled drivers build config 00:06:52.738 bus/ifpga: not in enabled drivers build config 00:06:52.738 bus/platform: not in enabled drivers build config 00:06:52.738 bus/uacce: not in enabled drivers build config 00:06:52.738 bus/vmbus: not in enabled drivers build config 00:06:52.738 common/cnxk: not in enabled drivers build config 00:06:52.738 common/mlx5: not in enabled drivers build config 
00:06:52.738 common/nfp: not in enabled drivers build config 00:06:52.738 common/nitrox: not in enabled drivers build config 00:06:52.738 common/qat: not in enabled drivers build config 00:06:52.738 common/sfc_efx: not in enabled drivers build config 00:06:52.738 mempool/bucket: not in enabled drivers build config 00:06:52.738 mempool/cnxk: not in enabled drivers build config 00:06:52.738 mempool/dpaa: not in enabled drivers build config 00:06:52.738 mempool/dpaa2: not in enabled drivers build config 00:06:52.738 mempool/octeontx: not in enabled drivers build config 00:06:52.738 mempool/stack: not in enabled drivers build config 00:06:52.738 dma/cnxk: not in enabled drivers build config 00:06:52.738 dma/dpaa: not in enabled drivers build config 00:06:52.738 dma/dpaa2: not in enabled drivers build config 00:06:52.738 dma/hisilicon: not in enabled drivers build config 00:06:52.738 dma/idxd: not in enabled drivers build config 00:06:52.738 dma/ioat: not in enabled drivers build config 00:06:52.738 dma/skeleton: not in enabled drivers build config 00:06:52.738 net/af_packet: not in enabled drivers build config 00:06:52.738 net/af_xdp: not in enabled drivers build config 00:06:52.738 net/ark: not in enabled drivers build config 00:06:52.738 net/atlantic: not in enabled drivers build config 00:06:52.738 net/avp: not in enabled drivers build config 00:06:52.738 net/axgbe: not in enabled drivers build config 00:06:52.738 net/bnx2x: not in enabled drivers build config 00:06:52.738 net/bnxt: not in enabled drivers build config 00:06:52.738 net/bonding: not in enabled drivers build config 00:06:52.738 net/cnxk: not in enabled drivers build config 00:06:52.738 net/cpfl: not in enabled drivers build config 00:06:52.738 net/cxgbe: not in enabled drivers build config 00:06:52.738 net/dpaa: not in enabled drivers build config 00:06:52.738 net/dpaa2: not in enabled drivers build config 00:06:52.738 net/e1000: not in enabled drivers build config 00:06:52.738 net/ena: not in enabled drivers build config 00:06:52.738 net/enetc: not in enabled drivers build config 00:06:52.738 net/enetfec: not in enabled drivers build config 00:06:52.738 net/enic: not in enabled drivers build config 00:06:52.738 net/failsafe: not in enabled drivers build config 00:06:52.738 net/fm10k: not in enabled drivers build config 00:06:52.738 net/gve: not in enabled drivers build config 00:06:52.738 net/hinic: not in enabled drivers build config 00:06:52.738 net/hns3: not in enabled drivers build config 00:06:52.738 net/i40e: not in enabled drivers build config 00:06:52.738 net/iavf: not in enabled drivers build config 00:06:52.738 net/ice: not in enabled drivers build config 00:06:52.738 net/idpf: not in enabled drivers build config 00:06:52.738 net/igc: not in enabled drivers build config 00:06:52.738 net/ionic: not in enabled drivers build config 00:06:52.738 net/ipn3ke: not in enabled drivers build config 00:06:52.738 net/ixgbe: not in enabled drivers build config 00:06:52.738 net/mana: not in enabled drivers build config 00:06:52.738 net/memif: not in enabled drivers build config 00:06:52.738 net/mlx4: not in enabled drivers build config 00:06:52.738 net/mlx5: not in enabled drivers build config 00:06:52.738 net/mvneta: not in enabled drivers build config 00:06:52.738 net/mvpp2: not in enabled drivers build config 00:06:52.738 net/netvsc: not in enabled drivers build config 00:06:52.738 net/nfb: not in enabled drivers build config 00:06:52.738 net/nfp: not in enabled drivers build config 00:06:52.738 net/ngbe: not in enabled 
drivers build config 00:06:52.738 net/null: not in enabled drivers build config 00:06:52.738 net/octeontx: not in enabled drivers build config 00:06:52.738 net/octeon_ep: not in enabled drivers build config 00:06:52.738 net/pcap: not in enabled drivers build config 00:06:52.738 net/pfe: not in enabled drivers build config 00:06:52.738 net/qede: not in enabled drivers build config 00:06:52.738 net/ring: not in enabled drivers build config 00:06:52.738 net/sfc: not in enabled drivers build config 00:06:52.738 net/softnic: not in enabled drivers build config 00:06:52.738 net/tap: not in enabled drivers build config 00:06:52.738 net/thunderx: not in enabled drivers build config 00:06:52.738 net/txgbe: not in enabled drivers build config 00:06:52.738 net/vdev_netvsc: not in enabled drivers build config 00:06:52.738 net/vhost: not in enabled drivers build config 00:06:52.738 net/virtio: not in enabled drivers build config 00:06:52.738 net/vmxnet3: not in enabled drivers build config 00:06:52.738 raw/*: missing internal dependency, "rawdev" 00:06:52.738 crypto/armv8: not in enabled drivers build config 00:06:52.738 crypto/bcmfs: not in enabled drivers build config 00:06:52.738 crypto/caam_jr: not in enabled drivers build config 00:06:52.738 crypto/ccp: not in enabled drivers build config 00:06:52.738 crypto/cnxk: not in enabled drivers build config 00:06:52.738 crypto/dpaa_sec: not in enabled drivers build config 00:06:52.738 crypto/dpaa2_sec: not in enabled drivers build config 00:06:52.738 crypto/ipsec_mb: not in enabled drivers build config 00:06:52.738 crypto/mlx5: not in enabled drivers build config 00:06:52.738 crypto/mvsam: not in enabled drivers build config 00:06:52.738 crypto/nitrox: not in enabled drivers build config 00:06:52.738 crypto/null: not in enabled drivers build config 00:06:52.738 crypto/octeontx: not in enabled drivers build config 00:06:52.738 crypto/openssl: not in enabled drivers build config 00:06:52.738 crypto/scheduler: not in enabled drivers build config 00:06:52.738 crypto/uadk: not in enabled drivers build config 00:06:52.738 crypto/virtio: not in enabled drivers build config 00:06:52.738 compress/isal: not in enabled drivers build config 00:06:52.738 compress/mlx5: not in enabled drivers build config 00:06:52.738 compress/nitrox: not in enabled drivers build config 00:06:52.738 compress/octeontx: not in enabled drivers build config 00:06:52.739 compress/zlib: not in enabled drivers build config 00:06:52.739 regex/*: missing internal dependency, "regexdev" 00:06:52.739 ml/*: missing internal dependency, "mldev" 00:06:52.739 vdpa/ifc: not in enabled drivers build config 00:06:52.739 vdpa/mlx5: not in enabled drivers build config 00:06:52.739 vdpa/nfp: not in enabled drivers build config 00:06:52.739 vdpa/sfc: not in enabled drivers build config 00:06:52.739 event/*: missing internal dependency, "eventdev" 00:06:52.739 baseband/*: missing internal dependency, "bbdev" 00:06:52.739 gpu/*: missing internal dependency, "gpudev" 00:06:52.739 00:06:52.739 00:06:52.998 Build targets in project: 85 00:06:52.998 00:06:52.998 DPDK 24.03.0 00:06:52.998 00:06:52.998 User defined options 00:06:52.998 buildtype : debug 00:06:52.998 default_library : shared 00:06:52.998 libdir : lib 00:06:52.998 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:52.998 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:52.998 c_link_args : 00:06:52.998 cpu_instruction_set: native 00:06:52.998 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:06:52.998 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:06:52.998 enable_docs : false 00:06:52.998 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:52.998 enable_kmods : false 00:06:52.998 max_lcores : 128 00:06:52.998 tests : false 00:06:52.998 00:06:52.998 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:53.266 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:06:53.530 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:53.530 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:53.530 [3/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:53.530 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:53.530 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:53.530 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:53.530 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:53.531 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:53.531 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:53.531 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:53.531 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:53.531 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:53.531 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:53.531 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:53.531 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:53.531 [16/268] Linking static target lib/librte_kvargs.a 00:06:53.531 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:53.531 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:53.531 [19/268] Linking static target lib/librte_log.a 00:06:53.795 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:53.795 [21/268] Linking static target lib/librte_pci.a 00:06:53.795 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:53.795 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:53.795 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:54.057 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:54.057 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:54.057 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:54.057 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:54.057 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:54.057 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:54.057 [31/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:54.057 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:54.057 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:54.057 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:54.057 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:54.057 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:54.057 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:54.057 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:54.057 [39/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:54.057 [40/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:54.057 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:54.057 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:54.057 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:54.057 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:54.057 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:54.057 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:54.057 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:54.057 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:54.057 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:54.057 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:54.057 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:54.057 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:54.057 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:54.057 [54/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:54.057 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:54.057 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:54.057 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:54.057 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:54.057 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:54.057 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:54.057 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:54.057 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:54.057 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:54.057 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:54.057 [65/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:54.057 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:54.057 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:54.057 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:54.057 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:54.057 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:54.057 
[71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:54.057 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:54.057 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:54.057 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:54.057 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:54.057 [76/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:54.057 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:54.058 [78/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:54.058 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:54.058 [80/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:54.058 [81/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:54.058 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:54.058 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:54.058 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:54.058 [85/268] Linking static target lib/librte_meter.a 00:06:54.058 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:54.058 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:54.058 [88/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:54.058 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:54.058 [90/268] Linking static target lib/librte_ring.a 00:06:54.058 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:54.058 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:54.058 [93/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:54.058 [94/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:54.058 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:54.058 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:54.058 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:54.058 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:54.058 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:54.058 [100/268] Linking static target lib/librte_telemetry.a 00:06:54.058 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:54.058 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:54.058 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:54.058 [104/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.058 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:54.316 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:54.317 [107/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:54.317 [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.317 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:54.317 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:54.317 [111/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:54.317 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:54.317 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:54.317 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:54.317 [115/268] Linking static target lib/librte_mempool.a 00:06:54.317 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:54.317 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:54.317 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:54.317 [119/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:54.317 [120/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:54.317 [121/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:54.317 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:54.317 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:54.317 [124/268] Linking static target lib/librte_rcu.a 00:06:54.317 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:54.317 [126/268] Linking static target lib/librte_net.a 00:06:54.317 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:54.317 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:54.317 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:54.317 [130/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:54.317 [131/268] Linking static target lib/librte_eal.a 00:06:54.317 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:54.317 [133/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:54.317 [134/268] Linking static target lib/librte_mbuf.a 00:06:54.317 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:54.317 [136/268] Linking static target lib/librte_cmdline.a 00:06:54.317 [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.317 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.317 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:54.317 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:54.317 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:54.317 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.317 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:54.576 [144/268] Linking target lib/librte_log.so.24.1 00:06:54.576 [145/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:54.576 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:54.576 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:54.576 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:54.576 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:54.576 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:54.576 [151/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:54.576 [152/268] Generating lib/net.sym_chk with a custom command 
(wrapped by meson to capture output) 00:06:54.576 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:54.576 [154/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:54.576 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:54.576 [156/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.576 [157/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:54.576 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:54.576 [159/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:54.576 [160/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:54.576 [161/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.576 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:54.576 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:54.576 [164/268] Linking static target lib/librte_reorder.a 00:06:54.576 [165/268] Linking static target lib/librte_timer.a 00:06:54.576 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:54.576 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:54.576 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:54.576 [169/268] Linking target lib/librte_kvargs.so.24.1 00:06:54.576 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:54.576 [171/268] Linking target lib/librte_telemetry.so.24.1 00:06:54.576 [172/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:54.576 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:54.576 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:54.576 [175/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:54.576 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:54.576 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:54.576 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:54.576 [179/268] Linking static target lib/librte_security.a 00:06:54.576 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:54.835 [181/268] Linking static target lib/librte_compressdev.a 00:06:54.835 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:54.835 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:54.835 [184/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:54.835 [185/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:54.835 [186/268] Linking static target lib/librte_dmadev.a 00:06:54.835 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:54.835 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:54.835 [189/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:54.835 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:54.835 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:54.835 [192/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:54.835 [193/268] Linking static target lib/librte_power.a 00:06:54.835 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:54.835 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:54.835 [196/268] Linking static target drivers/librte_bus_vdev.a 00:06:54.835 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:54.835 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:54.835 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:54.835 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:54.835 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:54.835 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:54.835 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:54.835 [204/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:54.835 [205/268] Linking static target drivers/librte_mempool_ring.a 00:06:54.835 [206/268] Linking static target lib/librte_hash.a 00:06:55.093 [207/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:55.093 [208/268] Linking static target lib/librte_cryptodev.a 00:06:55.093 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:55.093 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:55.093 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:55.093 [212/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.093 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.093 [214/268] Linking static target drivers/librte_bus_pci.a 00:06:55.093 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.093 [216/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.093 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.093 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:55.093 [219/268] Linking static target lib/librte_ethdev.a 00:06:55.351 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.351 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.351 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.610 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.610 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:55.610 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.869 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.869 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.807 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:56.807 [229/268] Linking 
static target lib/librte_vhost.a 00:06:56.807 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:58.712 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.988 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.247 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.247 [234/268] Linking target lib/librte_eal.so.24.1 00:07:04.507 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:04.507 [236/268] Linking target lib/librte_ring.so.24.1 00:07:04.507 [237/268] Linking target lib/librte_pci.so.24.1 00:07:04.507 [238/268] Linking target lib/librte_dmadev.so.24.1 00:07:04.507 [239/268] Linking target lib/librte_timer.so.24.1 00:07:04.507 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:04.507 [241/268] Linking target lib/librte_meter.so.24.1 00:07:04.507 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:04.767 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:04.767 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:04.767 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:04.767 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:04.767 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:04.767 [248/268] Linking target lib/librte_mempool.so.24.1 00:07:04.767 [249/268] Linking target lib/librte_rcu.so.24.1 00:07:04.767 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:04.767 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:04.767 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:04.767 [253/268] Linking target lib/librte_mbuf.so.24.1 00:07:05.026 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:05.026 [255/268] Linking target lib/librte_reorder.so.24.1 00:07:05.026 [256/268] Linking target lib/librte_net.so.24.1 00:07:05.026 [257/268] Linking target lib/librte_compressdev.so.24.1 00:07:05.026 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:07:05.285 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:05.285 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:05.285 [261/268] Linking target lib/librte_cmdline.so.24.1 00:07:05.285 [262/268] Linking target lib/librte_hash.so.24.1 00:07:05.285 [263/268] Linking target lib/librte_security.so.24.1 00:07:05.285 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:05.285 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:05.285 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:05.545 [267/268] Linking target lib/librte_power.so.24.1 00:07:05.545 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:05.545 INFO: autodetecting backend as ninja 00:07:05.545 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:07:17.767 CC lib/ut_mock/mock.o 00:07:17.767 CC lib/log/log.o 00:07:17.767 CC lib/log/log_flags.o 00:07:17.767 CC 
lib/log/log_deprecated.o 00:07:17.767 CC lib/ut/ut.o 00:07:17.767 LIB libspdk_ut.a 00:07:17.767 LIB libspdk_ut_mock.a 00:07:17.767 LIB libspdk_log.a 00:07:17.767 SO libspdk_ut.so.2.0 00:07:17.767 SO libspdk_ut_mock.so.6.0 00:07:17.767 SO libspdk_log.so.7.1 00:07:17.767 SYMLINK libspdk_ut.so 00:07:17.767 SYMLINK libspdk_ut_mock.so 00:07:17.767 SYMLINK libspdk_log.so 00:07:17.767 CC lib/dma/dma.o 00:07:17.767 CC lib/ioat/ioat.o 00:07:17.767 CXX lib/trace_parser/trace.o 00:07:17.767 CC lib/util/base64.o 00:07:17.767 CC lib/util/bit_array.o 00:07:17.767 CC lib/util/cpuset.o 00:07:17.767 CC lib/util/crc16.o 00:07:17.767 CC lib/util/crc32.o 00:07:17.767 CC lib/util/crc32c.o 00:07:17.767 CC lib/util/crc32_ieee.o 00:07:17.767 CC lib/util/crc64.o 00:07:17.767 CC lib/util/dif.o 00:07:17.767 CC lib/util/fd.o 00:07:17.767 CC lib/util/fd_group.o 00:07:17.767 CC lib/util/file.o 00:07:17.767 CC lib/util/hexlify.o 00:07:17.767 CC lib/util/iov.o 00:07:17.767 CC lib/util/math.o 00:07:17.767 CC lib/util/net.o 00:07:17.767 CC lib/util/pipe.o 00:07:17.767 CC lib/util/strerror_tls.o 00:07:17.767 CC lib/util/string.o 00:07:17.767 CC lib/util/uuid.o 00:07:17.767 CC lib/util/xor.o 00:07:17.767 CC lib/util/zipf.o 00:07:17.767 CC lib/util/md5.o 00:07:17.767 CC lib/vfio_user/host/vfio_user_pci.o 00:07:17.767 CC lib/vfio_user/host/vfio_user.o 00:07:17.767 LIB libspdk_dma.a 00:07:17.767 SO libspdk_dma.so.5.0 00:07:17.767 LIB libspdk_ioat.a 00:07:17.767 SYMLINK libspdk_dma.so 00:07:17.767 SO libspdk_ioat.so.7.0 00:07:17.767 SYMLINK libspdk_ioat.so 00:07:17.767 LIB libspdk_vfio_user.a 00:07:17.767 SO libspdk_vfio_user.so.5.0 00:07:17.767 SYMLINK libspdk_vfio_user.so 00:07:17.767 LIB libspdk_util.a 00:07:17.767 SO libspdk_util.so.10.1 00:07:17.767 SYMLINK libspdk_util.so 00:07:17.767 LIB libspdk_trace_parser.a 00:07:17.767 SO libspdk_trace_parser.so.6.0 00:07:17.767 SYMLINK libspdk_trace_parser.so 00:07:17.767 CC lib/json/json_parse.o 00:07:17.767 CC lib/json/json_util.o 00:07:17.767 CC lib/json/json_write.o 00:07:17.767 CC lib/conf/conf.o 00:07:17.767 CC lib/vmd/vmd.o 00:07:17.767 CC lib/rdma_utils/rdma_utils.o 00:07:17.767 CC lib/vmd/led.o 00:07:17.767 CC lib/idxd/idxd.o 00:07:17.767 CC lib/idxd/idxd_user.o 00:07:17.767 CC lib/env_dpdk/env.o 00:07:17.767 CC lib/idxd/idxd_kernel.o 00:07:17.767 CC lib/env_dpdk/memory.o 00:07:17.767 CC lib/env_dpdk/pci.o 00:07:17.767 CC lib/env_dpdk/init.o 00:07:17.767 CC lib/env_dpdk/threads.o 00:07:17.767 CC lib/env_dpdk/pci_ioat.o 00:07:17.767 CC lib/env_dpdk/pci_virtio.o 00:07:17.767 CC lib/env_dpdk/pci_vmd.o 00:07:17.767 CC lib/env_dpdk/pci_idxd.o 00:07:17.767 CC lib/env_dpdk/pci_event.o 00:07:17.767 CC lib/env_dpdk/sigbus_handler.o 00:07:17.767 CC lib/env_dpdk/pci_dpdk.o 00:07:17.767 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:17.767 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:17.767 LIB libspdk_conf.a 00:07:17.767 LIB libspdk_json.a 00:07:17.767 SO libspdk_conf.so.6.0 00:07:17.767 LIB libspdk_rdma_utils.a 00:07:17.767 SO libspdk_json.so.6.0 00:07:17.767 SO libspdk_rdma_utils.so.1.0 00:07:17.767 SYMLINK libspdk_conf.so 00:07:17.767 SYMLINK libspdk_json.so 00:07:17.767 SYMLINK libspdk_rdma_utils.so 00:07:17.767 LIB libspdk_idxd.a 00:07:17.767 LIB libspdk_vmd.a 00:07:17.767 SO libspdk_idxd.so.12.1 00:07:18.027 SO libspdk_vmd.so.6.0 00:07:18.027 SYMLINK libspdk_idxd.so 00:07:18.027 SYMLINK libspdk_vmd.so 00:07:18.027 CC lib/jsonrpc/jsonrpc_server.o 00:07:18.027 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:18.027 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:18.027 CC lib/jsonrpc/jsonrpc_client.o 
00:07:18.027 CC lib/rdma_provider/common.o 00:07:18.027 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:18.287 LIB libspdk_rdma_provider.a 00:07:18.287 LIB libspdk_jsonrpc.a 00:07:18.287 SO libspdk_rdma_provider.so.7.0 00:07:18.287 SO libspdk_jsonrpc.so.6.0 00:07:18.287 SYMLINK libspdk_rdma_provider.so 00:07:18.287 SYMLINK libspdk_jsonrpc.so 00:07:18.546 LIB libspdk_env_dpdk.a 00:07:18.546 SO libspdk_env_dpdk.so.15.1 00:07:18.546 CC lib/rpc/rpc.o 00:07:18.546 SYMLINK libspdk_env_dpdk.so 00:07:18.806 LIB libspdk_rpc.a 00:07:18.806 SO libspdk_rpc.so.6.0 00:07:18.806 SYMLINK libspdk_rpc.so 00:07:19.376 CC lib/trace/trace.o 00:07:19.376 CC lib/notify/notify.o 00:07:19.376 CC lib/trace/trace_flags.o 00:07:19.376 CC lib/notify/notify_rpc.o 00:07:19.376 CC lib/trace/trace_rpc.o 00:07:19.376 CC lib/keyring/keyring.o 00:07:19.376 CC lib/keyring/keyring_rpc.o 00:07:19.376 LIB libspdk_notify.a 00:07:19.376 SO libspdk_notify.so.6.0 00:07:19.376 LIB libspdk_keyring.a 00:07:19.376 LIB libspdk_trace.a 00:07:19.376 SYMLINK libspdk_notify.so 00:07:19.376 SO libspdk_keyring.so.2.0 00:07:19.376 SO libspdk_trace.so.11.0 00:07:19.636 SYMLINK libspdk_keyring.so 00:07:19.636 SYMLINK libspdk_trace.so 00:07:19.895 CC lib/thread/thread.o 00:07:19.895 CC lib/thread/iobuf.o 00:07:19.895 CC lib/sock/sock.o 00:07:19.895 CC lib/sock/sock_rpc.o 00:07:20.155 LIB libspdk_sock.a 00:07:20.155 SO libspdk_sock.so.10.0 00:07:20.155 SYMLINK libspdk_sock.so 00:07:20.723 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:20.723 CC lib/nvme/nvme_ctrlr.o 00:07:20.723 CC lib/nvme/nvme_fabric.o 00:07:20.723 CC lib/nvme/nvme_ns_cmd.o 00:07:20.723 CC lib/nvme/nvme_ns.o 00:07:20.723 CC lib/nvme/nvme_pcie_common.o 00:07:20.723 CC lib/nvme/nvme_pcie.o 00:07:20.723 CC lib/nvme/nvme_qpair.o 00:07:20.723 CC lib/nvme/nvme.o 00:07:20.723 CC lib/nvme/nvme_quirks.o 00:07:20.723 CC lib/nvme/nvme_transport.o 00:07:20.723 CC lib/nvme/nvme_discovery.o 00:07:20.723 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:20.723 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:20.723 CC lib/nvme/nvme_tcp.o 00:07:20.723 CC lib/nvme/nvme_opal.o 00:07:20.723 CC lib/nvme/nvme_io_msg.o 00:07:20.723 CC lib/nvme/nvme_poll_group.o 00:07:20.723 CC lib/nvme/nvme_zns.o 00:07:20.723 CC lib/nvme/nvme_stubs.o 00:07:20.723 CC lib/nvme/nvme_auth.o 00:07:20.723 CC lib/nvme/nvme_cuse.o 00:07:20.723 CC lib/nvme/nvme_vfio_user.o 00:07:20.723 CC lib/nvme/nvme_rdma.o 00:07:20.982 LIB libspdk_thread.a 00:07:20.982 SO libspdk_thread.so.11.0 00:07:20.982 SYMLINK libspdk_thread.so 00:07:21.241 CC lib/accel/accel.o 00:07:21.241 CC lib/accel/accel_rpc.o 00:07:21.241 CC lib/accel/accel_sw.o 00:07:21.241 CC lib/virtio/virtio_vfio_user.o 00:07:21.241 CC lib/virtio/virtio.o 00:07:21.241 CC lib/blob/blobstore.o 00:07:21.241 CC lib/virtio/virtio_vhost_user.o 00:07:21.241 CC lib/init/json_config.o 00:07:21.241 CC lib/blob/request.o 00:07:21.241 CC lib/init/subsystem.o 00:07:21.241 CC lib/init/subsystem_rpc.o 00:07:21.241 CC lib/blob/zeroes.o 00:07:21.241 CC lib/virtio/virtio_pci.o 00:07:21.241 CC lib/init/rpc.o 00:07:21.241 CC lib/blob/blob_bs_dev.o 00:07:21.241 CC lib/fsdev/fsdev.o 00:07:21.241 CC lib/fsdev/fsdev_io.o 00:07:21.241 CC lib/fsdev/fsdev_rpc.o 00:07:21.241 CC lib/vfu_tgt/tgt_endpoint.o 00:07:21.241 CC lib/vfu_tgt/tgt_rpc.o 00:07:21.500 LIB libspdk_init.a 00:07:21.500 SO libspdk_init.so.6.0 00:07:21.500 LIB libspdk_virtio.a 00:07:21.759 SYMLINK libspdk_init.so 00:07:21.759 LIB libspdk_vfu_tgt.a 00:07:21.759 SO libspdk_virtio.so.7.0 00:07:21.759 SO libspdk_vfu_tgt.so.3.0 00:07:21.759 SYMLINK libspdk_virtio.so 
00:07:21.759 SYMLINK libspdk_vfu_tgt.so 00:07:21.759 LIB libspdk_fsdev.a 00:07:22.018 SO libspdk_fsdev.so.2.0 00:07:22.018 CC lib/event/app.o 00:07:22.018 CC lib/event/reactor.o 00:07:22.018 CC lib/event/log_rpc.o 00:07:22.018 CC lib/event/app_rpc.o 00:07:22.018 CC lib/event/scheduler_static.o 00:07:22.018 SYMLINK libspdk_fsdev.so 00:07:22.276 LIB libspdk_accel.a 00:07:22.276 SO libspdk_accel.so.16.0 00:07:22.276 LIB libspdk_nvme.a 00:07:22.276 SYMLINK libspdk_accel.so 00:07:22.276 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:22.276 LIB libspdk_event.a 00:07:22.276 SO libspdk_event.so.14.0 00:07:22.276 SO libspdk_nvme.so.15.0 00:07:22.536 SYMLINK libspdk_event.so 00:07:22.536 SYMLINK libspdk_nvme.so 00:07:22.536 CC lib/bdev/bdev.o 00:07:22.536 CC lib/bdev/bdev_rpc.o 00:07:22.536 CC lib/bdev/bdev_zone.o 00:07:22.536 CC lib/bdev/scsi_nvme.o 00:07:22.536 CC lib/bdev/part.o 00:07:22.796 LIB libspdk_fuse_dispatcher.a 00:07:22.796 SO libspdk_fuse_dispatcher.so.1.0 00:07:22.796 SYMLINK libspdk_fuse_dispatcher.so 00:07:23.733 LIB libspdk_blob.a 00:07:23.733 SO libspdk_blob.so.11.0 00:07:23.733 SYMLINK libspdk_blob.so 00:07:23.992 CC lib/blobfs/blobfs.o 00:07:23.992 CC lib/blobfs/tree.o 00:07:23.992 CC lib/lvol/lvol.o 00:07:24.559 LIB libspdk_bdev.a 00:07:24.559 SO libspdk_bdev.so.17.0 00:07:24.559 SYMLINK libspdk_bdev.so 00:07:24.559 LIB libspdk_blobfs.a 00:07:24.559 SO libspdk_blobfs.so.10.0 00:07:24.559 LIB libspdk_lvol.a 00:07:24.559 SYMLINK libspdk_blobfs.so 00:07:24.559 SO libspdk_lvol.so.10.0 00:07:24.818 SYMLINK libspdk_lvol.so 00:07:24.818 CC lib/scsi/dev.o 00:07:24.818 CC lib/scsi/lun.o 00:07:24.818 CC lib/scsi/port.o 00:07:24.818 CC lib/nbd/nbd.o 00:07:24.818 CC lib/scsi/scsi.o 00:07:24.818 CC lib/nbd/nbd_rpc.o 00:07:24.818 CC lib/scsi/scsi_bdev.o 00:07:24.818 CC lib/nvmf/ctrlr.o 00:07:24.818 CC lib/ublk/ublk.o 00:07:24.818 CC lib/scsi/scsi_pr.o 00:07:24.818 CC lib/ublk/ublk_rpc.o 00:07:24.818 CC lib/nvmf/ctrlr_discovery.o 00:07:24.818 CC lib/scsi/scsi_rpc.o 00:07:24.818 CC lib/nvmf/ctrlr_bdev.o 00:07:24.818 CC lib/scsi/task.o 00:07:24.818 CC lib/nvmf/subsystem.o 00:07:24.818 CC lib/nvmf/nvmf.o 00:07:24.818 CC lib/nvmf/nvmf_rpc.o 00:07:24.818 CC lib/nvmf/transport.o 00:07:24.818 CC lib/nvmf/tcp.o 00:07:24.818 CC lib/nvmf/stubs.o 00:07:24.818 CC lib/ftl/ftl_core.o 00:07:24.818 CC lib/nvmf/mdns_server.o 00:07:24.818 CC lib/ftl/ftl_init.o 00:07:24.818 CC lib/nvmf/vfio_user.o 00:07:24.818 CC lib/ftl/ftl_layout.o 00:07:24.818 CC lib/ftl/ftl_debug.o 00:07:24.818 CC lib/nvmf/rdma.o 00:07:24.818 CC lib/nvmf/auth.o 00:07:24.818 CC lib/ftl/ftl_io.o 00:07:24.818 CC lib/ftl/ftl_sb.o 00:07:24.818 CC lib/ftl/ftl_l2p.o 00:07:24.818 CC lib/ftl/ftl_l2p_flat.o 00:07:24.818 CC lib/ftl/ftl_nv_cache.o 00:07:24.818 CC lib/ftl/ftl_band.o 00:07:24.818 CC lib/ftl/ftl_band_ops.o 00:07:24.818 CC lib/ftl/ftl_writer.o 00:07:24.818 CC lib/ftl/ftl_rq.o 00:07:24.818 CC lib/ftl/ftl_reloc.o 00:07:24.818 CC lib/ftl/ftl_l2p_cache.o 00:07:24.818 CC lib/ftl/ftl_p2l.o 00:07:24.818 CC lib/ftl/ftl_p2l_log.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:24.818 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:07:24.818 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:24.818 CC lib/ftl/utils/ftl_mempool.o 00:07:24.818 CC lib/ftl/utils/ftl_conf.o 00:07:24.818 CC lib/ftl/utils/ftl_property.o 00:07:24.818 CC lib/ftl/utils/ftl_md.o 00:07:24.818 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:24.818 CC lib/ftl/utils/ftl_bitmap.o 00:07:24.818 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:24.818 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:24.818 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:24.818 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:24.818 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:24.818 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:24.818 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:24.818 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:24.818 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:24.819 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:24.819 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:24.819 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:24.819 CC lib/ftl/base/ftl_base_dev.o 00:07:24.819 CC lib/ftl/base/ftl_base_bdev.o 00:07:24.819 CC lib/ftl/ftl_trace.o 00:07:25.754 LIB libspdk_nbd.a 00:07:25.754 SO libspdk_nbd.so.7.0 00:07:25.754 LIB libspdk_scsi.a 00:07:25.754 SYMLINK libspdk_nbd.so 00:07:25.754 SO libspdk_scsi.so.9.0 00:07:25.754 LIB libspdk_ublk.a 00:07:25.754 SYMLINK libspdk_scsi.so 00:07:25.754 SO libspdk_ublk.so.3.0 00:07:25.754 SYMLINK libspdk_ublk.so 00:07:26.013 LIB libspdk_ftl.a 00:07:26.013 CC lib/vhost/vhost.o 00:07:26.013 CC lib/vhost/vhost_rpc.o 00:07:26.013 CC lib/vhost/vhost_scsi.o 00:07:26.013 CC lib/vhost/vhost_blk.o 00:07:26.013 CC lib/vhost/rte_vhost_user.o 00:07:26.013 CC lib/iscsi/conn.o 00:07:26.013 CC lib/iscsi/init_grp.o 00:07:26.013 CC lib/iscsi/iscsi.o 00:07:26.013 CC lib/iscsi/portal_grp.o 00:07:26.013 CC lib/iscsi/param.o 00:07:26.013 CC lib/iscsi/tgt_node.o 00:07:26.013 CC lib/iscsi/iscsi_subsystem.o 00:07:26.013 CC lib/iscsi/iscsi_rpc.o 00:07:26.013 CC lib/iscsi/task.o 00:07:26.013 SO libspdk_ftl.so.9.0 00:07:26.271 SYMLINK libspdk_ftl.so 00:07:26.530 LIB libspdk_nvmf.a 00:07:26.789 SO libspdk_nvmf.so.20.0 00:07:26.789 LIB libspdk_vhost.a 00:07:26.789 SO libspdk_vhost.so.8.0 00:07:26.789 SYMLINK libspdk_nvmf.so 00:07:27.048 SYMLINK libspdk_vhost.so 00:07:27.048 LIB libspdk_iscsi.a 00:07:27.048 SO libspdk_iscsi.so.8.0 00:07:27.048 SYMLINK libspdk_iscsi.so 00:07:27.617 CC module/env_dpdk/env_dpdk_rpc.o 00:07:27.617 CC module/vfu_device/vfu_virtio.o 00:07:27.617 CC module/vfu_device/vfu_virtio_blk.o 00:07:27.617 CC module/vfu_device/vfu_virtio_rpc.o 00:07:27.617 CC module/vfu_device/vfu_virtio_fs.o 00:07:27.617 CC module/vfu_device/vfu_virtio_scsi.o 00:07:27.876 CC module/scheduler/gscheduler/gscheduler.o 00:07:27.876 CC module/blob/bdev/blob_bdev.o 00:07:27.876 LIB libspdk_env_dpdk_rpc.a 00:07:27.876 CC module/keyring/linux/keyring.o 00:07:27.876 CC module/keyring/linux/keyring_rpc.o 00:07:27.876 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:27.876 CC module/accel/iaa/accel_iaa.o 00:07:27.876 CC module/accel/iaa/accel_iaa_rpc.o 00:07:27.876 CC module/keyring/file/keyring.o 00:07:27.876 CC module/keyring/file/keyring_rpc.o 00:07:27.876 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:27.876 CC module/accel/ioat/accel_ioat.o 00:07:27.876 CC module/accel/ioat/accel_ioat_rpc.o 00:07:27.876 CC module/fsdev/aio/fsdev_aio.o 00:07:27.876 CC module/sock/posix/posix.o 00:07:27.876 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:27.876 CC module/fsdev/aio/linux_aio_mgr.o 00:07:27.876 CC module/accel/error/accel_error.o 00:07:27.876 CC module/accel/error/accel_error_rpc.o 
00:07:27.876 CC module/accel/dsa/accel_dsa.o 00:07:27.876 CC module/accel/dsa/accel_dsa_rpc.o 00:07:27.876 SO libspdk_env_dpdk_rpc.so.6.0 00:07:27.876 SYMLINK libspdk_env_dpdk_rpc.so 00:07:27.876 LIB libspdk_scheduler_gscheduler.a 00:07:27.876 LIB libspdk_keyring_linux.a 00:07:27.876 LIB libspdk_keyring_file.a 00:07:27.876 LIB libspdk_scheduler_dpdk_governor.a 00:07:27.876 SO libspdk_scheduler_gscheduler.so.4.0 00:07:28.135 SO libspdk_keyring_linux.so.1.0 00:07:28.135 LIB libspdk_accel_ioat.a 00:07:28.135 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:28.135 SO libspdk_keyring_file.so.2.0 00:07:28.135 LIB libspdk_scheduler_dynamic.a 00:07:28.135 LIB libspdk_accel_iaa.a 00:07:28.135 SO libspdk_accel_ioat.so.6.0 00:07:28.135 LIB libspdk_accel_error.a 00:07:28.135 SO libspdk_scheduler_dynamic.so.4.0 00:07:28.135 SYMLINK libspdk_scheduler_gscheduler.so 00:07:28.135 SO libspdk_accel_iaa.so.3.0 00:07:28.135 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:28.135 SYMLINK libspdk_keyring_linux.so 00:07:28.135 LIB libspdk_blob_bdev.a 00:07:28.135 SYMLINK libspdk_keyring_file.so 00:07:28.135 SO libspdk_accel_error.so.2.0 00:07:28.135 SO libspdk_blob_bdev.so.11.0 00:07:28.135 SYMLINK libspdk_scheduler_dynamic.so 00:07:28.135 SYMLINK libspdk_accel_ioat.so 00:07:28.135 LIB libspdk_accel_dsa.a 00:07:28.135 SYMLINK libspdk_accel_iaa.so 00:07:28.135 SO libspdk_accel_dsa.so.5.0 00:07:28.135 SYMLINK libspdk_accel_error.so 00:07:28.135 SYMLINK libspdk_blob_bdev.so 00:07:28.135 SYMLINK libspdk_accel_dsa.so 00:07:28.135 LIB libspdk_vfu_device.a 00:07:28.135 SO libspdk_vfu_device.so.3.0 00:07:28.394 SYMLINK libspdk_vfu_device.so 00:07:28.394 LIB libspdk_fsdev_aio.a 00:07:28.394 SO libspdk_fsdev_aio.so.1.0 00:07:28.394 LIB libspdk_sock_posix.a 00:07:28.394 SO libspdk_sock_posix.so.6.0 00:07:28.394 SYMLINK libspdk_fsdev_aio.so 00:07:28.653 SYMLINK libspdk_sock_posix.so 00:07:28.653 CC module/bdev/gpt/gpt.o 00:07:28.653 CC module/bdev/gpt/vbdev_gpt.o 00:07:28.653 CC module/bdev/lvol/vbdev_lvol.o 00:07:28.653 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:28.653 CC module/bdev/raid/bdev_raid.o 00:07:28.653 CC module/bdev/raid/bdev_raid_rpc.o 00:07:28.653 CC module/bdev/raid/bdev_raid_sb.o 00:07:28.653 CC module/bdev/ftl/bdev_ftl.o 00:07:28.653 CC module/bdev/error/vbdev_error.o 00:07:28.653 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:28.653 CC module/bdev/raid/raid0.o 00:07:28.653 CC module/bdev/error/vbdev_error_rpc.o 00:07:28.653 CC module/bdev/raid/raid1.o 00:07:28.653 CC module/bdev/raid/concat.o 00:07:28.653 CC module/bdev/delay/vbdev_delay.o 00:07:28.653 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:28.653 CC module/bdev/passthru/vbdev_passthru.o 00:07:28.653 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:28.653 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:28.653 CC module/bdev/null/bdev_null.o 00:07:28.653 CC module/bdev/aio/bdev_aio.o 00:07:28.653 CC module/bdev/malloc/bdev_malloc.o 00:07:28.653 CC module/bdev/aio/bdev_aio_rpc.o 00:07:28.653 CC module/bdev/null/bdev_null_rpc.o 00:07:28.653 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:28.653 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:28.653 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:28.653 CC module/blobfs/bdev/blobfs_bdev.o 00:07:28.653 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:28.653 CC module/bdev/nvme/bdev_nvme.o 00:07:28.653 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:28.653 CC module/bdev/nvme/nvme_rpc.o 00:07:28.653 CC module/bdev/nvme/bdev_mdns_client.o 00:07:28.653 CC module/bdev/nvme/vbdev_opal.o 00:07:28.653 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:07:28.653 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:28.653 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:28.653 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:28.653 CC module/bdev/iscsi/bdev_iscsi.o 00:07:28.653 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:28.653 CC module/bdev/split/vbdev_split.o 00:07:28.653 CC module/bdev/split/vbdev_split_rpc.o 00:07:28.912 LIB libspdk_blobfs_bdev.a 00:07:28.912 SO libspdk_blobfs_bdev.so.6.0 00:07:28.912 LIB libspdk_bdev_gpt.a 00:07:28.912 LIB libspdk_bdev_split.a 00:07:28.912 SO libspdk_bdev_gpt.so.6.0 00:07:28.912 LIB libspdk_bdev_passthru.a 00:07:28.912 SYMLINK libspdk_blobfs_bdev.so 00:07:28.912 SO libspdk_bdev_split.so.6.0 00:07:28.912 LIB libspdk_bdev_error.a 00:07:28.912 SO libspdk_bdev_passthru.so.6.0 00:07:28.912 LIB libspdk_bdev_null.a 00:07:28.912 SYMLINK libspdk_bdev_gpt.so 00:07:28.912 SO libspdk_bdev_error.so.6.0 00:07:28.912 LIB libspdk_bdev_ftl.a 00:07:28.912 LIB libspdk_bdev_aio.a 00:07:28.912 SYMLINK libspdk_bdev_split.so 00:07:28.912 SO libspdk_bdev_null.so.6.0 00:07:29.171 SYMLINK libspdk_bdev_passthru.so 00:07:29.171 SO libspdk_bdev_ftl.so.6.0 00:07:29.171 LIB libspdk_bdev_zone_block.a 00:07:29.171 LIB libspdk_bdev_delay.a 00:07:29.171 SO libspdk_bdev_aio.so.6.0 00:07:29.171 LIB libspdk_bdev_iscsi.a 00:07:29.171 LIB libspdk_bdev_malloc.a 00:07:29.171 SYMLINK libspdk_bdev_error.so 00:07:29.171 SO libspdk_bdev_zone_block.so.6.0 00:07:29.171 SYMLINK libspdk_bdev_null.so 00:07:29.171 SO libspdk_bdev_delay.so.6.0 00:07:29.171 SO libspdk_bdev_iscsi.so.6.0 00:07:29.171 SYMLINK libspdk_bdev_ftl.so 00:07:29.171 LIB libspdk_bdev_lvol.a 00:07:29.171 SO libspdk_bdev_malloc.so.6.0 00:07:29.171 SYMLINK libspdk_bdev_aio.so 00:07:29.171 SO libspdk_bdev_lvol.so.6.0 00:07:29.171 SYMLINK libspdk_bdev_zone_block.so 00:07:29.171 SYMLINK libspdk_bdev_delay.so 00:07:29.171 SYMLINK libspdk_bdev_iscsi.so 00:07:29.171 LIB libspdk_bdev_virtio.a 00:07:29.171 SYMLINK libspdk_bdev_malloc.so 00:07:29.171 SO libspdk_bdev_virtio.so.6.0 00:07:29.171 SYMLINK libspdk_bdev_lvol.so 00:07:29.171 SYMLINK libspdk_bdev_virtio.so 00:07:29.429 LIB libspdk_bdev_raid.a 00:07:29.429 SO libspdk_bdev_raid.so.6.0 00:07:29.689 SYMLINK libspdk_bdev_raid.so 00:07:30.626 LIB libspdk_bdev_nvme.a 00:07:30.626 SO libspdk_bdev_nvme.so.7.1 00:07:30.626 SYMLINK libspdk_bdev_nvme.so 00:07:31.563 CC module/event/subsystems/vmd/vmd.o 00:07:31.563 CC module/event/subsystems/scheduler/scheduler.o 00:07:31.563 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:31.563 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:31.563 CC module/event/subsystems/iobuf/iobuf.o 00:07:31.563 CC module/event/subsystems/sock/sock.o 00:07:31.563 CC module/event/subsystems/keyring/keyring.o 00:07:31.563 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:31.563 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:31.563 CC module/event/subsystems/fsdev/fsdev.o 00:07:31.563 LIB libspdk_event_vmd.a 00:07:31.563 LIB libspdk_event_fsdev.a 00:07:31.563 LIB libspdk_event_scheduler.a 00:07:31.563 LIB libspdk_event_keyring.a 00:07:31.563 LIB libspdk_event_vhost_blk.a 00:07:31.563 LIB libspdk_event_vfu_tgt.a 00:07:31.563 LIB libspdk_event_sock.a 00:07:31.563 LIB libspdk_event_iobuf.a 00:07:31.563 SO libspdk_event_fsdev.so.1.0 00:07:31.563 SO libspdk_event_vmd.so.6.0 00:07:31.563 SO libspdk_event_scheduler.so.4.0 00:07:31.563 SO libspdk_event_keyring.so.1.0 00:07:31.563 SO libspdk_event_vfu_tgt.so.3.0 00:07:31.563 SO libspdk_event_iobuf.so.3.0 00:07:31.563 SO 
libspdk_event_sock.so.5.0 00:07:31.563 SO libspdk_event_vhost_blk.so.3.0 00:07:31.563 SYMLINK libspdk_event_fsdev.so 00:07:31.563 SYMLINK libspdk_event_vfu_tgt.so 00:07:31.563 SYMLINK libspdk_event_scheduler.so 00:07:31.563 SYMLINK libspdk_event_vmd.so 00:07:31.563 SYMLINK libspdk_event_keyring.so 00:07:31.563 SYMLINK libspdk_event_vhost_blk.so 00:07:31.563 SYMLINK libspdk_event_sock.so 00:07:31.563 SYMLINK libspdk_event_iobuf.so 00:07:31.891 CC module/event/subsystems/accel/accel.o 00:07:32.193 LIB libspdk_event_accel.a 00:07:32.193 SO libspdk_event_accel.so.6.0 00:07:32.193 SYMLINK libspdk_event_accel.so 00:07:32.478 CC module/event/subsystems/bdev/bdev.o 00:07:32.747 LIB libspdk_event_bdev.a 00:07:32.747 SO libspdk_event_bdev.so.6.0 00:07:32.747 SYMLINK libspdk_event_bdev.so 00:07:33.006 CC module/event/subsystems/scsi/scsi.o 00:07:33.006 CC module/event/subsystems/nbd/nbd.o 00:07:33.006 CC module/event/subsystems/ublk/ublk.o 00:07:33.006 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:33.006 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:33.266 LIB libspdk_event_nbd.a 00:07:33.266 LIB libspdk_event_ublk.a 00:07:33.266 LIB libspdk_event_scsi.a 00:07:33.266 SO libspdk_event_nbd.so.6.0 00:07:33.266 SO libspdk_event_ublk.so.3.0 00:07:33.266 SO libspdk_event_scsi.so.6.0 00:07:33.266 LIB libspdk_event_nvmf.a 00:07:33.266 SYMLINK libspdk_event_nbd.so 00:07:33.266 SYMLINK libspdk_event_ublk.so 00:07:33.266 SO libspdk_event_nvmf.so.6.0 00:07:33.266 SYMLINK libspdk_event_scsi.so 00:07:33.266 SYMLINK libspdk_event_nvmf.so 00:07:33.525 CC module/event/subsystems/iscsi/iscsi.o 00:07:33.525 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:33.784 LIB libspdk_event_vhost_scsi.a 00:07:33.784 LIB libspdk_event_iscsi.a 00:07:33.784 SO libspdk_event_vhost_scsi.so.3.0 00:07:33.784 SO libspdk_event_iscsi.so.6.0 00:07:33.784 SYMLINK libspdk_event_vhost_scsi.so 00:07:33.784 SYMLINK libspdk_event_iscsi.so 00:07:34.043 SO libspdk.so.6.0 00:07:34.043 SYMLINK libspdk.so 00:07:34.302 TEST_HEADER include/spdk/accel.h 00:07:34.302 TEST_HEADER include/spdk/assert.h 00:07:34.302 TEST_HEADER include/spdk/accel_module.h 00:07:34.302 CXX app/trace/trace.o 00:07:34.302 TEST_HEADER include/spdk/barrier.h 00:07:34.302 TEST_HEADER include/spdk/base64.h 00:07:34.302 CC test/rpc_client/rpc_client_test.o 00:07:34.302 TEST_HEADER include/spdk/bdev_module.h 00:07:34.302 TEST_HEADER include/spdk/bdev.h 00:07:34.302 TEST_HEADER include/spdk/bdev_zone.h 00:07:34.302 TEST_HEADER include/spdk/bit_array.h 00:07:34.302 TEST_HEADER include/spdk/bit_pool.h 00:07:34.302 TEST_HEADER include/spdk/blob_bdev.h 00:07:34.302 CC app/spdk_nvme_perf/perf.o 00:07:34.302 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:34.302 TEST_HEADER include/spdk/blob.h 00:07:34.302 TEST_HEADER include/spdk/blobfs.h 00:07:34.302 TEST_HEADER include/spdk/config.h 00:07:34.302 CC app/trace_record/trace_record.o 00:07:34.302 TEST_HEADER include/spdk/cpuset.h 00:07:34.302 CC app/spdk_top/spdk_top.o 00:07:34.302 TEST_HEADER include/spdk/conf.h 00:07:34.302 TEST_HEADER include/spdk/crc16.h 00:07:34.302 TEST_HEADER include/spdk/crc32.h 00:07:34.302 TEST_HEADER include/spdk/dif.h 00:07:34.302 TEST_HEADER include/spdk/crc64.h 00:07:34.302 CC app/spdk_nvme_identify/identify.o 00:07:34.302 TEST_HEADER include/spdk/endian.h 00:07:34.302 TEST_HEADER include/spdk/dma.h 00:07:34.302 TEST_HEADER include/spdk/env_dpdk.h 00:07:34.302 TEST_HEADER include/spdk/env.h 00:07:34.302 TEST_HEADER include/spdk/event.h 00:07:34.302 CC app/spdk_nvme_discover/discovery_aer.o 
00:07:34.302 TEST_HEADER include/spdk/fd_group.h 00:07:34.302 TEST_HEADER include/spdk/fd.h 00:07:34.302 TEST_HEADER include/spdk/file.h 00:07:34.302 TEST_HEADER include/spdk/fsdev.h 00:07:34.302 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:34.302 TEST_HEADER include/spdk/ftl.h 00:07:34.302 TEST_HEADER include/spdk/fsdev_module.h 00:07:34.302 TEST_HEADER include/spdk/hexlify.h 00:07:34.302 TEST_HEADER include/spdk/gpt_spec.h 00:07:34.302 TEST_HEADER include/spdk/idxd_spec.h 00:07:34.302 TEST_HEADER include/spdk/histogram_data.h 00:07:34.302 TEST_HEADER include/spdk/idxd.h 00:07:34.302 CC app/spdk_lspci/spdk_lspci.o 00:07:34.302 TEST_HEADER include/spdk/init.h 00:07:34.302 TEST_HEADER include/spdk/ioat.h 00:07:34.302 TEST_HEADER include/spdk/iscsi_spec.h 00:07:34.302 TEST_HEADER include/spdk/json.h 00:07:34.302 TEST_HEADER include/spdk/ioat_spec.h 00:07:34.302 TEST_HEADER include/spdk/jsonrpc.h 00:07:34.302 TEST_HEADER include/spdk/keyring.h 00:07:34.302 TEST_HEADER include/spdk/keyring_module.h 00:07:34.302 TEST_HEADER include/spdk/likely.h 00:07:34.302 TEST_HEADER include/spdk/log.h 00:07:34.302 TEST_HEADER include/spdk/md5.h 00:07:34.302 TEST_HEADER include/spdk/lvol.h 00:07:34.302 TEST_HEADER include/spdk/nbd.h 00:07:34.302 TEST_HEADER include/spdk/memory.h 00:07:34.302 TEST_HEADER include/spdk/mmio.h 00:07:34.302 TEST_HEADER include/spdk/net.h 00:07:34.302 TEST_HEADER include/spdk/notify.h 00:07:34.302 TEST_HEADER include/spdk/nvme_intel.h 00:07:34.302 TEST_HEADER include/spdk/nvme.h 00:07:34.302 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:34.302 TEST_HEADER include/spdk/nvme_zns.h 00:07:34.302 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:34.302 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:34.302 TEST_HEADER include/spdk/nvme_spec.h 00:07:34.302 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:34.302 TEST_HEADER include/spdk/nvmf_spec.h 00:07:34.302 TEST_HEADER include/spdk/nvmf.h 00:07:34.302 TEST_HEADER include/spdk/nvmf_transport.h 00:07:34.302 TEST_HEADER include/spdk/opal.h 00:07:34.302 TEST_HEADER include/spdk/pci_ids.h 00:07:34.302 TEST_HEADER include/spdk/opal_spec.h 00:07:34.302 TEST_HEADER include/spdk/queue.h 00:07:34.303 TEST_HEADER include/spdk/reduce.h 00:07:34.303 TEST_HEADER include/spdk/rpc.h 00:07:34.303 TEST_HEADER include/spdk/scheduler.h 00:07:34.303 TEST_HEADER include/spdk/pipe.h 00:07:34.303 TEST_HEADER include/spdk/scsi.h 00:07:34.303 TEST_HEADER include/spdk/sock.h 00:07:34.303 CC app/nvmf_tgt/nvmf_main.o 00:07:34.303 TEST_HEADER include/spdk/scsi_spec.h 00:07:34.303 TEST_HEADER include/spdk/stdinc.h 00:07:34.303 TEST_HEADER include/spdk/thread.h 00:07:34.303 TEST_HEADER include/spdk/string.h 00:07:34.303 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:34.569 TEST_HEADER include/spdk/trace_parser.h 00:07:34.569 TEST_HEADER include/spdk/trace.h 00:07:34.569 TEST_HEADER include/spdk/ublk.h 00:07:34.569 TEST_HEADER include/spdk/tree.h 00:07:34.569 CC app/spdk_dd/spdk_dd.o 00:07:34.569 TEST_HEADER include/spdk/util.h 00:07:34.569 CC app/iscsi_tgt/iscsi_tgt.o 00:07:34.569 TEST_HEADER include/spdk/version.h 00:07:34.569 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:34.569 TEST_HEADER include/spdk/uuid.h 00:07:34.569 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:34.569 TEST_HEADER include/spdk/vmd.h 00:07:34.569 TEST_HEADER include/spdk/vhost.h 00:07:34.569 TEST_HEADER include/spdk/zipf.h 00:07:34.569 TEST_HEADER include/spdk/xor.h 00:07:34.569 CXX test/cpp_headers/accel.o 00:07:34.569 CXX test/cpp_headers/accel_module.o 00:07:34.569 CXX 
test/cpp_headers/assert.o 00:07:34.569 CXX test/cpp_headers/barrier.o 00:07:34.569 CXX test/cpp_headers/base64.o 00:07:34.569 CXX test/cpp_headers/bdev_module.o 00:07:34.569 CXX test/cpp_headers/bdev.o 00:07:34.569 CXX test/cpp_headers/bit_array.o 00:07:34.569 CXX test/cpp_headers/bit_pool.o 00:07:34.569 CXX test/cpp_headers/bdev_zone.o 00:07:34.569 CXX test/cpp_headers/blobfs_bdev.o 00:07:34.569 CXX test/cpp_headers/blobfs.o 00:07:34.569 CXX test/cpp_headers/blob_bdev.o 00:07:34.569 CXX test/cpp_headers/conf.o 00:07:34.569 CXX test/cpp_headers/cpuset.o 00:07:34.569 CXX test/cpp_headers/config.o 00:07:34.569 CXX test/cpp_headers/blob.o 00:07:34.569 CXX test/cpp_headers/crc32.o 00:07:34.569 CXX test/cpp_headers/crc16.o 00:07:34.569 CXX test/cpp_headers/crc64.o 00:07:34.569 CXX test/cpp_headers/dma.o 00:07:34.569 CXX test/cpp_headers/endian.o 00:07:34.569 CXX test/cpp_headers/dif.o 00:07:34.569 CXX test/cpp_headers/env_dpdk.o 00:07:34.569 CXX test/cpp_headers/fd.o 00:07:34.569 CXX test/cpp_headers/event.o 00:07:34.569 CXX test/cpp_headers/env.o 00:07:34.569 CC app/spdk_tgt/spdk_tgt.o 00:07:34.569 CXX test/cpp_headers/fsdev_module.o 00:07:34.569 CXX test/cpp_headers/fd_group.o 00:07:34.569 CXX test/cpp_headers/file.o 00:07:34.569 CXX test/cpp_headers/fsdev.o 00:07:34.569 CXX test/cpp_headers/ftl.o 00:07:34.569 CXX test/cpp_headers/gpt_spec.o 00:07:34.569 CXX test/cpp_headers/fuse_dispatcher.o 00:07:34.569 CXX test/cpp_headers/histogram_data.o 00:07:34.569 CXX test/cpp_headers/hexlify.o 00:07:34.569 CXX test/cpp_headers/idxd.o 00:07:34.569 CXX test/cpp_headers/ioat.o 00:07:34.569 CXX test/cpp_headers/idxd_spec.o 00:07:34.569 CXX test/cpp_headers/init.o 00:07:34.569 CXX test/cpp_headers/ioat_spec.o 00:07:34.569 CXX test/cpp_headers/iscsi_spec.o 00:07:34.569 CXX test/cpp_headers/jsonrpc.o 00:07:34.569 CXX test/cpp_headers/json.o 00:07:34.569 CXX test/cpp_headers/log.o 00:07:34.569 CXX test/cpp_headers/keyring.o 00:07:34.569 CXX test/cpp_headers/keyring_module.o 00:07:34.569 CXX test/cpp_headers/likely.o 00:07:34.569 CXX test/cpp_headers/lvol.o 00:07:34.569 CXX test/cpp_headers/md5.o 00:07:34.569 CXX test/cpp_headers/mmio.o 00:07:34.569 CXX test/cpp_headers/memory.o 00:07:34.569 CXX test/cpp_headers/nbd.o 00:07:34.569 CXX test/cpp_headers/nvme.o 00:07:34.569 CXX test/cpp_headers/notify.o 00:07:34.569 CXX test/cpp_headers/net.o 00:07:34.569 CXX test/cpp_headers/nvme_intel.o 00:07:34.569 CXX test/cpp_headers/nvme_ocssd.o 00:07:34.569 CXX test/cpp_headers/nvme_spec.o 00:07:34.569 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:34.569 CXX test/cpp_headers/nvme_zns.o 00:07:34.569 CXX test/cpp_headers/nvmf_cmd.o 00:07:34.569 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:34.569 CXX test/cpp_headers/nvmf_spec.o 00:07:34.569 CXX test/cpp_headers/opal.o 00:07:34.569 CXX test/cpp_headers/nvmf_transport.o 00:07:34.569 CXX test/cpp_headers/nvmf.o 00:07:34.569 CXX test/cpp_headers/opal_spec.o 00:07:34.569 CC test/app/jsoncat/jsoncat.o 00:07:34.569 CC examples/ioat/perf/perf.o 00:07:34.569 CC test/env/vtophys/vtophys.o 00:07:34.569 CC test/app/histogram_perf/histogram_perf.o 00:07:34.569 CC test/app/stub/stub.o 00:07:34.569 CC test/dma/test_dma/test_dma.o 00:07:34.569 CC examples/ioat/verify/verify.o 00:07:34.569 CC examples/util/zipf/zipf.o 00:07:34.569 CC app/fio/nvme/fio_plugin.o 00:07:34.569 CC test/thread/poller_perf/poller_perf.o 00:07:34.569 CC test/env/pci/pci_ut.o 00:07:34.569 CC test/app/bdev_svc/bdev_svc.o 00:07:34.843 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:34.843 CC 
test/env/memory/memory_ut.o 00:07:34.843 CC app/fio/bdev/fio_plugin.o 00:07:34.843 LINK spdk_lspci 00:07:34.843 LINK spdk_nvme_discover 00:07:35.107 LINK nvmf_tgt 00:07:35.107 LINK rpc_client_test 00:07:35.107 LINK interrupt_tgt 00:07:35.107 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:35.107 CC test/env/mem_callbacks/mem_callbacks.o 00:07:35.107 CXX test/cpp_headers/pci_ids.o 00:07:35.107 CXX test/cpp_headers/pipe.o 00:07:35.107 LINK iscsi_tgt 00:07:35.107 CXX test/cpp_headers/queue.o 00:07:35.107 CXX test/cpp_headers/reduce.o 00:07:35.107 CXX test/cpp_headers/rpc.o 00:07:35.107 CXX test/cpp_headers/scheduler.o 00:07:35.107 CXX test/cpp_headers/scsi.o 00:07:35.107 CXX test/cpp_headers/scsi_spec.o 00:07:35.107 LINK jsoncat 00:07:35.107 CXX test/cpp_headers/sock.o 00:07:35.108 CXX test/cpp_headers/stdinc.o 00:07:35.108 CXX test/cpp_headers/string.o 00:07:35.108 LINK histogram_perf 00:07:35.108 CXX test/cpp_headers/thread.o 00:07:35.108 CXX test/cpp_headers/trace.o 00:07:35.108 CXX test/cpp_headers/trace_parser.o 00:07:35.108 CXX test/cpp_headers/tree.o 00:07:35.108 CXX test/cpp_headers/ublk.o 00:07:35.108 CXX test/cpp_headers/uuid.o 00:07:35.108 CXX test/cpp_headers/util.o 00:07:35.108 CXX test/cpp_headers/version.o 00:07:35.108 CXX test/cpp_headers/vfio_user_pci.o 00:07:35.108 CXX test/cpp_headers/vfio_user_spec.o 00:07:35.108 CXX test/cpp_headers/vhost.o 00:07:35.108 CXX test/cpp_headers/vmd.o 00:07:35.108 CXX test/cpp_headers/xor.o 00:07:35.108 CXX test/cpp_headers/zipf.o 00:07:35.108 LINK vtophys 00:07:35.108 LINK spdk_trace_record 00:07:35.108 LINK poller_perf 00:07:35.367 LINK zipf 00:07:35.367 LINK verify 00:07:35.367 LINK env_dpdk_post_init 00:07:35.367 LINK stub 00:07:35.367 LINK bdev_svc 00:07:35.367 LINK spdk_tgt 00:07:35.367 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:35.367 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:35.367 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:35.367 LINK ioat_perf 00:07:35.367 LINK spdk_dd 00:07:35.367 LINK spdk_trace 00:07:35.367 LINK pci_ut 00:07:35.625 LINK test_dma 00:07:35.625 LINK spdk_nvme 00:07:35.625 LINK spdk_bdev 00:07:35.625 CC examples/sock/hello_world/hello_sock.o 00:07:35.625 CC examples/idxd/perf/perf.o 00:07:35.625 LINK nvme_fuzz 00:07:35.625 CC examples/vmd/led/led.o 00:07:35.625 CC examples/vmd/lsvmd/lsvmd.o 00:07:35.625 LINK spdk_top 00:07:35.625 CC test/event/reactor/reactor.o 00:07:35.625 LINK vhost_fuzz 00:07:35.625 CC test/event/event_perf/event_perf.o 00:07:35.625 CC test/event/reactor_perf/reactor_perf.o 00:07:35.883 CC test/event/app_repeat/app_repeat.o 00:07:35.883 CC examples/thread/thread/thread_ex.o 00:07:35.883 CC test/event/scheduler/scheduler.o 00:07:35.883 LINK spdk_nvme_perf 00:07:35.883 CC app/vhost/vhost.o 00:07:35.883 LINK spdk_nvme_identify 00:07:35.883 LINK led 00:07:35.883 LINK lsvmd 00:07:35.883 LINK mem_callbacks 00:07:35.883 LINK reactor 00:07:35.883 LINK reactor_perf 00:07:35.883 LINK event_perf 00:07:35.883 LINK hello_sock 00:07:35.883 LINK app_repeat 00:07:35.883 LINK idxd_perf 00:07:36.141 LINK thread 00:07:36.141 LINK scheduler 00:07:36.141 LINK vhost 00:07:36.141 CC test/nvme/reset/reset.o 00:07:36.141 CC test/nvme/err_injection/err_injection.o 00:07:36.141 CC test/nvme/cuse/cuse.o 00:07:36.141 CC test/nvme/sgl/sgl.o 00:07:36.141 CC test/nvme/fused_ordering/fused_ordering.o 00:07:36.141 CC test/nvme/boot_partition/boot_partition.o 00:07:36.141 CC test/nvme/compliance/nvme_compliance.o 00:07:36.141 CC test/nvme/overhead/overhead.o 00:07:36.141 CC test/nvme/startup/startup.o 00:07:36.141 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:07:36.141 CC test/nvme/simple_copy/simple_copy.o 00:07:36.141 CC test/nvme/aer/aer.o 00:07:36.141 CC test/nvme/reserve/reserve.o 00:07:36.141 CC test/nvme/connect_stress/connect_stress.o 00:07:36.141 CC test/nvme/e2edp/nvme_dp.o 00:07:36.141 CC test/nvme/fdp/fdp.o 00:07:36.141 CC test/accel/dif/dif.o 00:07:36.141 CC test/blobfs/mkfs/mkfs.o 00:07:36.141 LINK memory_ut 00:07:36.141 CC test/lvol/esnap/esnap.o 00:07:36.399 LINK boot_partition 00:07:36.399 LINK err_injection 00:07:36.399 LINK doorbell_aers 00:07:36.399 LINK startup 00:07:36.399 LINK fused_ordering 00:07:36.399 LINK connect_stress 00:07:36.399 LINK reserve 00:07:36.399 LINK reset 00:07:36.399 LINK simple_copy 00:07:36.399 LINK sgl 00:07:36.399 LINK nvme_dp 00:07:36.399 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:36.399 CC examples/nvme/hotplug/hotplug.o 00:07:36.399 CC examples/nvme/reconnect/reconnect.o 00:07:36.399 LINK overhead 00:07:36.399 CC examples/nvme/arbitration/arbitration.o 00:07:36.399 CC examples/nvme/hello_world/hello_world.o 00:07:36.399 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:36.399 CC examples/nvme/abort/abort.o 00:07:36.399 LINK mkfs 00:07:36.399 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:36.399 LINK nvme_compliance 00:07:36.399 LINK aer 00:07:36.399 LINK fdp 00:07:36.399 CC examples/accel/perf/accel_perf.o 00:07:36.658 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:36.658 CC examples/blob/cli/blobcli.o 00:07:36.658 CC examples/blob/hello_world/hello_blob.o 00:07:36.658 LINK cmb_copy 00:07:36.658 LINK pmr_persistence 00:07:36.658 LINK hotplug 00:07:36.658 LINK hello_world 00:07:36.658 LINK reconnect 00:07:36.658 LINK arbitration 00:07:36.658 LINK abort 00:07:36.658 LINK dif 00:07:36.658 LINK iscsi_fuzz 00:07:36.916 LINK hello_fsdev 00:07:36.916 LINK hello_blob 00:07:36.916 LINK nvme_manage 00:07:36.916 LINK accel_perf 00:07:36.916 LINK blobcli 00:07:37.175 LINK cuse 00:07:37.175 CC test/bdev/bdevio/bdevio.o 00:07:37.434 CC examples/bdev/hello_world/hello_bdev.o 00:07:37.434 CC examples/bdev/bdevperf/bdevperf.o 00:07:37.693 LINK bdevio 00:07:37.693 LINK hello_bdev 00:07:37.952 LINK bdevperf 00:07:38.519 CC examples/nvmf/nvmf/nvmf.o 00:07:38.778 LINK nvmf 00:07:39.716 LINK esnap 00:07:39.975 00:07:39.975 real 0m55.890s 00:07:39.975 user 8m16.931s 00:07:39.975 sys 3m44.829s 00:07:39.975 06:18:11 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:07:39.975 06:18:11 make -- common/autotest_common.sh@10 -- $ set +x 00:07:39.975 ************************************ 00:07:39.975 END TEST make 00:07:39.975 ************************************ 00:07:39.975 06:18:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:39.975 06:18:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:39.975 06:18:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:39.975 06:18:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.975 06:18:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:39.975 06:18:11 -- pm/common@44 -- $ pid=273108 00:07:39.975 06:18:11 -- pm/common@50 -- $ kill -TERM 273108 00:07:39.975 06:18:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.975 06:18:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:39.975 06:18:11 -- pm/common@44 -- $ pid=273109 00:07:39.975 06:18:11 -- pm/common@50 -- $ kill -TERM 273109 00:07:39.975 
06:18:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:39.975 06:18:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:39.975 06:18:11 -- pm/common@44 -- $ pid=273111 00:07:40.234 06:18:11 -- pm/common@50 -- $ kill -TERM 273111 00:07:40.234 06:18:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.234 06:18:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:40.234 06:18:11 -- pm/common@44 -- $ pid=273134 00:07:40.234 06:18:11 -- pm/common@50 -- $ sudo -E kill -TERM 273134 00:07:40.234 06:18:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:40.234 06:18:11 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:40.234 06:18:11 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:40.234 06:18:11 -- common/autotest_common.sh@1691 -- # lcov --version 00:07:40.234 06:18:11 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:40.234 06:18:11 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:40.234 06:18:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.234 06:18:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.234 06:18:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.234 06:18:11 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.234 06:18:12 -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.234 06:18:12 -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.234 06:18:12 -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.234 06:18:12 -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.234 06:18:12 -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.235 06:18:12 -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.235 06:18:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.235 06:18:12 -- scripts/common.sh@344 -- # case "$op" in 00:07:40.235 06:18:12 -- scripts/common.sh@345 -- # : 1 00:07:40.235 06:18:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.235 06:18:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.235 06:18:12 -- scripts/common.sh@365 -- # decimal 1 00:07:40.235 06:18:12 -- scripts/common.sh@353 -- # local d=1 00:07:40.235 06:18:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.235 06:18:12 -- scripts/common.sh@355 -- # echo 1 00:07:40.235 06:18:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.235 06:18:12 -- scripts/common.sh@366 -- # decimal 2 00:07:40.235 06:18:12 -- scripts/common.sh@353 -- # local d=2 00:07:40.235 06:18:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.235 06:18:12 -- scripts/common.sh@355 -- # echo 2 00:07:40.235 06:18:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.235 06:18:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.235 06:18:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.235 06:18:12 -- scripts/common.sh@368 -- # return 0 00:07:40.235 06:18:12 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.235 06:18:12 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.235 --rc genhtml_branch_coverage=1 00:07:40.235 --rc genhtml_function_coverage=1 00:07:40.235 --rc genhtml_legend=1 00:07:40.235 --rc geninfo_all_blocks=1 00:07:40.235 --rc geninfo_unexecuted_blocks=1 00:07:40.235 00:07:40.235 ' 00:07:40.235 06:18:12 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.235 --rc genhtml_branch_coverage=1 00:07:40.235 --rc genhtml_function_coverage=1 00:07:40.235 --rc genhtml_legend=1 00:07:40.235 --rc geninfo_all_blocks=1 00:07:40.235 --rc geninfo_unexecuted_blocks=1 00:07:40.235 00:07:40.235 ' 00:07:40.235 06:18:12 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.235 --rc genhtml_branch_coverage=1 00:07:40.235 --rc genhtml_function_coverage=1 00:07:40.235 --rc genhtml_legend=1 00:07:40.235 --rc geninfo_all_blocks=1 00:07:40.235 --rc geninfo_unexecuted_blocks=1 00:07:40.235 00:07:40.235 ' 00:07:40.235 06:18:12 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.235 --rc genhtml_branch_coverage=1 00:07:40.235 --rc genhtml_function_coverage=1 00:07:40.235 --rc genhtml_legend=1 00:07:40.235 --rc geninfo_all_blocks=1 00:07:40.235 --rc geninfo_unexecuted_blocks=1 00:07:40.235 00:07:40.235 ' 00:07:40.235 06:18:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.235 06:18:12 -- nvmf/common.sh@7 -- # uname -s 00:07:40.235 06:18:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.235 06:18:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.235 06:18:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.235 06:18:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.235 06:18:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.235 06:18:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.235 06:18:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.235 06:18:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.235 06:18:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.235 06:18:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.235 06:18:12 -- nvmf/common.sh@17 -- # 
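The cmp_versions trace above (`lt 1.15 2` deciding which lcov flags to pass) splits each version string on dots and compares field by field, padding the shorter one with zeros. A self-contained sketch of the same idea for digits-only version fields, not the scripts/common.sh original:

#!/usr/bin/env bash
# Sketch: numeric, field-wise version comparison like scripts/common.sh's
# cmp_versions. Returns 0 when $1 < $2, mirroring the `lt 1.15 2` call above.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields count as 0, so "2" compares like "2.0".
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not less-than
}

version_lt 1.15 2 && echo "old lcov: enable branch/function coverage options"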
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:40.235 06:18:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:40.235 06:18:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.235 06:18:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.235 06:18:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.235 06:18:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.235 06:18:12 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.235 06:18:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.235 06:18:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.235 06:18:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.235 06:18:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.235 06:18:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.235 06:18:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.235 06:18:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.235 06:18:12 -- paths/export.sh@5 -- # export PATH 00:07:40.235 06:18:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.235 06:18:12 -- nvmf/common.sh@51 -- # : 0 00:07:40.235 06:18:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.235 06:18:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.235 06:18:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.235 06:18:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.235 06:18:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.235 06:18:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.235 06:18:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.235 06:18:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.235 06:18:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.235 06:18:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:40.235 06:18:12 -- spdk/autotest.sh@32 -- # uname -s 00:07:40.235 06:18:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:40.235 06:18:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:40.235 06:18:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
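The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above comes from feeding an empty string to a numeric test: `[ '' -eq 1 ]` is a runtime error in bash's test builtin, and the script only survives because the non-zero status falls through to the next branch. A small sketch of the failure and the usual guard (the flag name is illustrative):

#!/usr/bin/env bash
# Sketch: why `[ "$var" -eq 1 ]` breaks when var is empty, and a safe form.
unset SPDK_TEST_SOMETHING      # illustrative flag name, not from the log

# Reproduces the "[: : integer expression expected" error seen above; the
# test returns non-zero, so the if simply falls through without matching.
if [ "$SPDK_TEST_SOMETHING" -eq 1 ]; then
    echo "feature enabled"
fi

# Guarded form: default the variable to 0 before the numeric comparison.
if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled (flag unset or 0)"
fi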
00:07:40.235 06:18:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:40.235 06:18:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:40.235 06:18:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:40.235 06:18:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:40.235 06:18:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:40.495 06:18:12 -- spdk/autotest.sh@48 -- # udevadm_pid=336070 00:07:40.495 06:18:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:40.495 06:18:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:40.495 06:18:12 -- pm/common@17 -- # local monitor 00:07:40.495 06:18:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.495 06:18:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.495 06:18:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.495 06:18:12 -- pm/common@21 -- # date +%s 00:07:40.495 06:18:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.495 06:18:12 -- pm/common@21 -- # date +%s 00:07:40.495 06:18:12 -- pm/common@25 -- # sleep 1 00:07:40.495 06:18:12 -- pm/common@21 -- # date +%s 00:07:40.495 06:18:12 -- pm/common@21 -- # date +%s 00:07:40.495 06:18:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079892 00:07:40.495 06:18:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079892 00:07:40.495 06:18:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079892 00:07:40.495 06:18:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079892 00:07:40.495 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079892_collect-vmstat.pm.log 00:07:40.495 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079892_collect-cpu-load.pm.log 00:07:40.495 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079892_collect-cpu-temp.pm.log 00:07:40.495 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079892_collect-bmc-pm.bmc.pm.log 00:07:41.432 06:18:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:41.432 06:18:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:41.432 06:18:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.432 06:18:13 -- common/autotest_common.sh@10 -- # set +x 00:07:41.432 06:18:13 -- spdk/autotest.sh@59 -- # create_test_list 00:07:41.432 06:18:13 -- common/autotest_common.sh@750 -- # xtrace_disable 00:07:41.432 06:18:13 -- common/autotest_common.sh@10 -- # set +x 00:07:41.432 06:18:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:41.433 06:18:13 
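autotest.sh above saves the old core_pattern, points the kernel at core-collector.sh, and stamps all four monitor logs with a single `date +%s` value so they sort together per run. A sketch of that setup, assuming root; the `%P %s %t` placeholders come from the log, while the paths and the restore trap are illustrative assumptions:

#!/usr/bin/env bash
# Sketch: pipe kernel core dumps into a collector script for the duration
# of a test run, and start monitors with a shared epoch-stamped log suffix.
set -e
coredump_dir=/tmp/output/coredumps     # illustrative output location
mkdir -p "$coredump_dir"

# Save the current pattern so it can be restored when the run ends.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT

# The leading '|' makes the kernel pipe each dump into the helper;
# %P = pid, %s = signal number, %t = dump time (as traced above).
echo "|/path/to/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

# One timestamp shared by all monitors keeps their logs grouped per run.
stamp=$(date +%s)
for mon in collect-cpu-load collect-vmstat; do   # subset of the monitors above
    "/path/to/pm/$mon" -l -p "monitor.autotest.sh.$stamp" &
done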
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.433 06:18:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.433 06:18:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:41.433 06:18:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.433 06:18:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:41.433 06:18:13 -- common/autotest_common.sh@1455 -- # uname 00:07:41.433 06:18:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:07:41.433 06:18:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:41.433 06:18:13 -- common/autotest_common.sh@1475 -- # uname 00:07:41.433 06:18:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:07:41.433 06:18:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:41.433 06:18:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:41.433 lcov: LCOV version 1.15 00:07:41.433 06:18:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:59.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:59.527 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:06.095 06:18:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:06.095 06:18:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.095 06:18:37 -- common/autotest_common.sh@10 -- # set +x 00:08:06.095 06:18:37 -- spdk/autotest.sh@78 -- # rm -f 00:08:06.095 06:18:37 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:09.385 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:08:09.385 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:08:09.385 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:08:09.385 06:18:41 -- 
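The lcov invocation above records a zero-count baseline (`-i`) before any test runs; merging it later with a post-test capture is what makes never-executed files show up at 0% instead of disappearing from the report. A sketch of the full three-step workflow with the same flags, assuming lcov 1.15 as detected above (directory names and the test entry point are illustrative):

#!/usr/bin/env bash
# Sketch: lcov baseline -> run tests -> capture -> combine.
set -e
src_dir=$PWD                 # tree built with --coverage instrumentation
out_dir=$PWD/../output       # illustrative output location
lcov_opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

# 1. Baseline: -i records every instrumented file with zero hit counts.
lcov "${lcov_opts[@]}" -q -c --no-external -i -t Baseline \
     -d "$src_dir" -o "$out_dir/cov_base.info"

# 2. Run the suite; .gcda counter files accumulate next to the objects.
make -C "$src_dir" test      # illustrative test entry point

# 3. Capture real counts, then merge so untested files keep 0% entries.
lcov "${lcov_opts[@]}" -q -c --no-external -t Tests \
     -d "$src_dir" -o "$out_dir/cov_test.info"
lcov "${lcov_opts[@]}" -q -a "$out_dir/cov_base.info" \
     -a "$out_dir/cov_test.info" -o "$out_dir/cov_total.info"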
spdk/autotest.sh@83 -- # get_zoned_devs 00:08:09.385 06:18:41 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:09.385 06:18:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:09.385 06:18:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:09.385 06:18:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:09.385 06:18:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:09.385 06:18:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:09.385 06:18:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:09.385 06:18:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:09.385 06:18:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:09.385 06:18:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:09.385 06:18:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:09.385 06:18:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:09.385 06:18:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:09.385 06:18:41 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:09.385 No valid GPT data, bailing 00:08:09.385 06:18:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:09.385 06:18:41 -- scripts/common.sh@394 -- # pt= 00:08:09.385 06:18:41 -- scripts/common.sh@395 -- # return 1 00:08:09.385 06:18:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:09.385 1+0 records in 00:08:09.385 1+0 records out 00:08:09.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416546 s, 252 MB/s 00:08:09.385 06:18:41 -- spdk/autotest.sh@105 -- # sync 00:08:09.385 06:18:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:09.385 06:18:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:09.385 06:18:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:15.956 06:18:46 -- spdk/autotest.sh@111 -- # uname -s 00:08:15.956 06:18:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:15.956 06:18:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:15.956 06:18:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:17.862 Hugepages 00:08:17.862 node hugesize free / total 00:08:17.862 node0 1048576kB 0 / 0 00:08:17.862 node0 2048kB 0 / 0 00:08:17.862 node1 1048576kB 0 / 0 00:08:17.862 node1 2048kB 0 / 0 00:08:17.862 00:08:17.863 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:17.863 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:08:17.863 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:08:17.863 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:08:17.863 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:08:17.863 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:08:17.863 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:08:17.863 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:08:17.863 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:08:17.863 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:08:17.863 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:08:17.863 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:08:17.863 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:08:17.863 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:08:17.863 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:08:17.863 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:08:17.863 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:08:17.863 I/OAT 0000:80:04.7 8086 
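Before zeroing /dev/nvme0n1 above, autotest probes for a partition table ("No valid GPT data, bailing" from the gpt helper, then an empty `pt=` from blkid) and only wipes the first 1 MiB when nothing is found. A sketch of that guard with the same blkid/dd calls; the device path is an example, and since this destroys data the sketch is deliberately conservative:

#!/usr/bin/env bash
# Sketch: only scrub a block device's header if no partition table exists.
dev=${1:?usage: $0 /dev/nvmeXnY}

# blkid prints "gpt" / "dos" etc. for the PTTYPE tag, or nothing at all.
pt=$(blkid -s PTTYPE -o value "$dev" || true)

if [[ -n $pt ]]; then
    echo "$dev carries a '$pt' partition table; refusing to touch it" >&2
    exit 1
fi

# No table found: zero the first 1 MiB so stale metadata cannot confuse
# later test stages (same bs/count as the dd traced above).
dd if=/dev/zero of="$dev" bs=1M count=1
sync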
2021 1 ioatdma - - 00:08:17.863 06:18:49 -- spdk/autotest.sh@117 -- # uname -s 00:08:17.863 06:18:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:17.863 06:18:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:17.863 06:18:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:21.153 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:21.153 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:22.530 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:22.530 06:18:54 -- common/autotest_common.sh@1515 -- # sleep 1 00:08:23.468 06:18:55 -- common/autotest_common.sh@1516 -- # bdfs=() 00:08:23.468 06:18:55 -- common/autotest_common.sh@1516 -- # local bdfs 00:08:23.468 06:18:55 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:08:23.468 06:18:55 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:08:23.468 06:18:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:23.468 06:18:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:23.468 06:18:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:23.468 06:18:55 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:23.468 06:18:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:23.468 06:18:55 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:23.468 06:18:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:08:23.468 06:18:55 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:26.760 Waiting for block devices as requested 00:08:26.760 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:08:26.760 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:26.760 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:26.760 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:26.760 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:26.760 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:26.760 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:27.019 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:27.019 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:27.019 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:27.278 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:27.278 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:27.278 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:27.537 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:27.537 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:27.537 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:27.537 0000:80:04.0 (8086 2021): 
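get_nvme_bdfs above derives the NVMe PCI address list by asking gen_nvme.sh for a JSON config and extracting every traddr with jq. A standalone sketch that recovers the same list straight from sysfs, for systems without the SPDK helper; 0x010802 is the standard PCI class code for NVMe controllers (mass storage / NVM / NVMe prog-if):

#!/usr/bin/env bash
# Sketch: enumerate NVMe controller BDFs (0000:5e:00.0 style) from sysfs,
# approximating gen_nvme.sh | jq -r '.config[].params.traddr'.
get_nvme_bdfs() {
    local dev class
    for dev in /sys/bus/pci/devices/*; do
        class=$(< "$dev/class")
        # 0x010802: base class 01 (storage), subclass 08 (NVM), prog-if 02 (NVMe)
        [[ $class == 0x010802 ]] && basename "$dev"
    done
}

bdfs=($(get_nvme_bdfs))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"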
vfio-pci -> ioatdma 00:08:27.796 06:18:59 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:27.796 06:18:59 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:08:27.796 06:18:59 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:08:27.796 06:18:59 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:08:27.796 06:18:59 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:27.796 06:18:59 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:08:27.796 06:18:59 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:27.796 06:18:59 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:27.796 06:18:59 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:27.796 06:18:59 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:27.796 06:18:59 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:27.796 06:18:59 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:27.796 06:18:59 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:27.796 06:18:59 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:08:27.796 06:18:59 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:27.796 06:18:59 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:27.796 06:18:59 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:08:27.796 06:18:59 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:27.796 06:18:59 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:27.796 06:18:59 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:27.796 06:18:59 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:27.796 06:18:59 -- common/autotest_common.sh@1541 -- # continue 00:08:27.796 06:18:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:27.796 06:18:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.796 06:18:59 -- common/autotest_common.sh@10 -- # set +x 00:08:27.796 06:18:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:27.796 06:18:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.796 06:18:59 -- common/autotest_common.sh@10 -- # set +x 00:08:27.796 06:18:59 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:31.087 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:31.087 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:32.467 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:32.467 06:19:04 -- spdk/autotest.sh@127 -- # timing_exit 
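The id-ctrl parsing above extracts the controller's OACS field (here 0xe) and masks bit 3 to decide whether namespace management is supported; `oacs_ns_manage=8` is the result of 0xe & 0x8 being non-zero. A sketch of the same probe, requiring nvme-cli and a controller node:

#!/usr/bin/env bash
# Sketch: check the Namespace Management bit (bit 3) of an NVMe controller's
# OACS field, the same way the autotest trace above does.
ctrlr=${1:-/dev/nvme0}

# id-ctrl prints lines like "oacs      : 0xe"; keep the value after ':'.
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)

# Bash arithmetic accepts the 0x prefix, so masking works on the raw string.
oacs_ns_manage=$(( oacs & 0x8 ))

if (( oacs_ns_manage != 0 )); then
    echo "$ctrlr supports namespace management"
else
    echo "$ctrlr: no namespace management (oacs=$oacs)"
fi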
afterboot 00:08:32.467 06:19:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.467 06:19:04 -- common/autotest_common.sh@10 -- # set +x 00:08:32.467 06:19:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:32.467 06:19:04 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:32.467 06:19:04 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:32.467 06:19:04 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:32.467 06:19:04 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:32.467 06:19:04 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:32.467 06:19:04 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:32.467 06:19:04 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:32.467 06:19:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:32.467 06:19:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:32.467 06:19:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:32.467 06:19:04 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:32.467 06:19:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:32.467 06:19:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:32.467 06:19:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:08:32.467 06:19:04 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:32.467 06:19:04 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:08:32.467 06:19:04 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:08:32.467 06:19:04 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:32.467 06:19:04 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:08:32.467 06:19:04 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:08:32.467 06:19:04 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:08:32.467 06:19:04 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:08:32.467 06:19:04 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=350326 00:08:32.467 06:19:04 -- common/autotest_common.sh@1583 -- # waitforlisten 350326 00:08:32.467 06:19:04 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:32.467 06:19:04 -- common/autotest_common.sh@833 -- # '[' -z 350326 ']' 00:08:32.467 06:19:04 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.467 06:19:04 -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.467 06:19:04 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.467 06:19:04 -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.467 06:19:04 -- common/autotest_common.sh@10 -- # set +x 00:08:32.467 [2024-11-20 06:19:04.204956] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
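get_nvme_bdfs_by_id above narrows the controller list to one PCI device id (0x0a54) by reading each BDF's sysfs `device` file and comparing. A sketch of that filter, reusing the class-code enumeration idea from the earlier sketch; the default target id is the one in the log:

#!/usr/bin/env bash
# Sketch: keep only the NVMe BDFs whose PCI device id matches a target,
# mirroring the 0x0a54 filter in the opal_revert_cleanup trace above.
target=${1:-0x0a54}
declare -a matched

for bdf in /sys/bus/pci/devices/*; do
    # Skip non-NVMe functions; 0x010802 is the NVMe controller class code.
    [[ $(< "$bdf/class") == 0x010802 ]] || continue
    if [[ $(< "$bdf/device") == "$target" ]]; then
        matched+=("$(basename "$bdf")")
    fi
done

(( ${#matched[@]} > 0 )) && printf '%s\n' "${matched[@]}"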
00:08:32.467 [2024-11-20 06:19:04.205007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350326 ] 00:08:32.467 [2024-11-20 06:19:04.279830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.726 [2024-11-20 06:19:04.321177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.726 06:19:04 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.726 06:19:04 -- common/autotest_common.sh@866 -- # return 0 00:08:32.726 06:19:04 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:08:32.726 06:19:04 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:08:32.726 06:19:04 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:08:36.026 nvme0n1 00:08:36.026 06:19:07 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:36.026 [2024-11-20 06:19:07.725184] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:08:36.026 request: 00:08:36.026 { 00:08:36.026 "nvme_ctrlr_name": "nvme0", 00:08:36.026 "password": "test", 00:08:36.026 "method": "bdev_nvme_opal_revert", 00:08:36.026 "req_id": 1 00:08:36.026 } 00:08:36.026 Got JSON-RPC error response 00:08:36.026 response: 00:08:36.026 { 00:08:36.026 "code": -32602, 00:08:36.026 "message": "Invalid parameters" 00:08:36.026 } 00:08:36.026 06:19:07 -- common/autotest_common.sh@1589 -- # true 00:08:36.026 06:19:07 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:08:36.026 06:19:07 -- common/autotest_common.sh@1593 -- # killprocess 350326 00:08:36.026 06:19:07 -- common/autotest_common.sh@952 -- # '[' -z 350326 ']' 00:08:36.026 06:19:07 -- common/autotest_common.sh@956 -- # kill -0 350326 00:08:36.026 06:19:07 -- common/autotest_common.sh@957 -- # uname 00:08:36.026 06:19:07 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:36.026 06:19:07 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 350326 00:08:36.026 06:19:07 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:36.026 06:19:07 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:36.026 06:19:07 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 350326' 00:08:36.026 killing process with pid 350326 00:08:36.026 06:19:07 -- common/autotest_common.sh@971 -- # kill 350326 00:08:36.026 06:19:07 -- common/autotest_common.sh@976 -- # wait 350326 00:08:38.560 06:19:10 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:38.560 06:19:10 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:38.560 06:19:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:38.560 06:19:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:38.560 06:19:10 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:38.560 06:19:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.560 06:19:10 -- common/autotest_common.sh@10 -- # set +x 00:08:38.560 06:19:10 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:38.560 06:19:10 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:38.560 06:19:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.560 06:19:10 -- common/autotest_common.sh@1109 -- # 
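The JSON-RPC exchange above shows bdev_nvme_opal_revert failing with -32602 because this controller has no Opal support, and autotest treating that as non-fatal. A sketch of issuing the same call and tolerating exactly that outcome; the rpc.py path, the default socket, and the -b/-p flag spellings are assumed from the traced parameters rather than taken verbatim from the log:

#!/usr/bin/env bash
# Sketch: attempt an Opal revert over SPDK's JSON-RPC and tolerate a
# "controller does not support Opal" failure the way autotest does above.
rpc=/path/to/spdk/scripts/rpc.py    # adjust to the local checkout
sock=/var/tmp/spdk.sock             # rpc.py's default server socket

if out=$("$rpc" -s "$sock" bdev_nvme_opal_revert -b nvme0 -p test 2>&1); then
    echo "opal revert issued"
elif grep -q 'Invalid parameters' <<< "$out"; then
    # -32602 from a non-Opal controller: expected on most lab drives.
    echo "nvme0 has no Opal support; skipping revert"
else
    echo "unexpected RPC failure:" >&2
    printf '%s\n' "$out" >&2
    exit 1
fi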
xtrace_disable 00:08:38.560 06:19:10 -- common/autotest_common.sh@10 -- # set +x 00:08:38.560 ************************************ 00:08:38.560 START TEST env 00:08:38.560 ************************************ 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:38.560 * Looking for test storage... 00:08:38.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1691 -- # lcov --version 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:38.560 06:19:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.560 06:19:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.560 06:19:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.560 06:19:10 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.560 06:19:10 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.560 06:19:10 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.560 06:19:10 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.560 06:19:10 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.560 06:19:10 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.560 06:19:10 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.560 06:19:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.560 06:19:10 env -- scripts/common.sh@344 -- # case "$op" in 00:08:38.560 06:19:10 env -- scripts/common.sh@345 -- # : 1 00:08:38.560 06:19:10 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.560 06:19:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
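run_test above wraps each suite in the starred START/END banner rows, times it (the `real 0m55.890s` block earlier came from the same wrapper), and rejects calls with too few arguments (`'[' 2 -le 1 ']'`). A minimal stand-in with the same shape; the banner width and the rc echo are cosmetic choices, not the autotest_common original:

#!/usr/bin/env bash
# Sketch: a run_test-style wrapper that banners and times a named test.
run_test() {
    # Mirrors the `[ $# -le 1 ]` guard traced above: need a name AND a command.
    if [ "$#" -le 1 ]; then
        echo "usage: run_test <name> <command> [args...]" >&2
        return 1
    fi
    local name=$1; shift
    printf '%s\n' "************************************"
    printf 'START TEST %s\n' "$name"
    printf '%s\n' "************************************"
    time "$@"
    local rc=$?
    printf '%s\n' "************************************"
    printf 'END TEST %s (rc=%d)\n' "$name" "$rc"
    printf '%s\n' "************************************"
    return $rc
}

run_test env ./test/env/env.sh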
ver1_l : ver2_l) )) 00:08:38.560 06:19:10 env -- scripts/common.sh@365 -- # decimal 1 00:08:38.560 06:19:10 env -- scripts/common.sh@353 -- # local d=1 00:08:38.560 06:19:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.560 06:19:10 env -- scripts/common.sh@355 -- # echo 1 00:08:38.560 06:19:10 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.560 06:19:10 env -- scripts/common.sh@366 -- # decimal 2 00:08:38.560 06:19:10 env -- scripts/common.sh@353 -- # local d=2 00:08:38.560 06:19:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.560 06:19:10 env -- scripts/common.sh@355 -- # echo 2 00:08:38.560 06:19:10 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.560 06:19:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.560 06:19:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.560 06:19:10 env -- scripts/common.sh@368 -- # return 0 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.560 --rc genhtml_branch_coverage=1 00:08:38.560 --rc genhtml_function_coverage=1 00:08:38.560 --rc genhtml_legend=1 00:08:38.560 --rc geninfo_all_blocks=1 00:08:38.560 --rc geninfo_unexecuted_blocks=1 00:08:38.560 00:08:38.560 ' 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.560 --rc genhtml_branch_coverage=1 00:08:38.560 --rc genhtml_function_coverage=1 00:08:38.560 --rc genhtml_legend=1 00:08:38.560 --rc geninfo_all_blocks=1 00:08:38.560 --rc geninfo_unexecuted_blocks=1 00:08:38.560 00:08:38.560 ' 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.560 --rc genhtml_branch_coverage=1 00:08:38.560 --rc genhtml_function_coverage=1 00:08:38.560 --rc genhtml_legend=1 00:08:38.560 --rc geninfo_all_blocks=1 00:08:38.560 --rc geninfo_unexecuted_blocks=1 00:08:38.560 00:08:38.560 ' 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.560 --rc genhtml_branch_coverage=1 00:08:38.560 --rc genhtml_function_coverage=1 00:08:38.560 --rc genhtml_legend=1 00:08:38.560 --rc geninfo_all_blocks=1 00:08:38.560 --rc geninfo_unexecuted_blocks=1 00:08:38.560 00:08:38.560 ' 00:08:38.560 06:19:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:38.560 06:19:10 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.561 06:19:10 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.561 06:19:10 env -- common/autotest_common.sh@10 -- # set +x 00:08:38.561 ************************************ 00:08:38.561 START TEST env_memory 00:08:38.561 ************************************ 00:08:38.561 06:19:10 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:38.561 00:08:38.561 00:08:38.561 CUnit - A unit testing framework for C - Version 2.1-3 00:08:38.561 http://cunit.sourceforge.net/ 00:08:38.561 00:08:38.561 00:08:38.561 Suite: memory 00:08:38.561 Test: alloc and free memory map ...[2024-11-20 06:19:10.295508] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:38.561 passed 00:08:38.561 Test: mem map translation ...[2024-11-20 06:19:10.314554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:38.561 [2024-11-20 06:19:10.314573] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:38.561 [2024-11-20 06:19:10.314609] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:38.561 [2024-11-20 06:19:10.314615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:38.561 passed 00:08:38.561 Test: mem map registration ...[2024-11-20 06:19:10.352998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:38.561 [2024-11-20 06:19:10.353015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:38.561 passed 00:08:38.883 Test: mem map adjacent registrations ...passed 00:08:38.883 00:08:38.883 Run Summary: Type Total Ran Passed Failed Inactive 00:08:38.883 suites 1 1 n/a 0 0 00:08:38.883 tests 4 4 4 0 0 00:08:38.883 asserts 152 152 152 0 n/a 00:08:38.883 00:08:38.883 Elapsed time = 0.141 seconds 00:08:38.883 00:08:38.883 real 0m0.154s 00:08:38.883 user 0m0.144s 00:08:38.883 sys 0m0.009s 00:08:38.883 06:19:10 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.883 06:19:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:38.883 ************************************ 00:08:38.883 END TEST env_memory 00:08:38.883 ************************************ 00:08:38.883 06:19:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:38.883 06:19:10 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.883 06:19:10 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.883 06:19:10 env -- common/autotest_common.sh@10 -- # set +x 00:08:38.883 ************************************ 00:08:38.883 START TEST env_vtophys 00:08:38.883 ************************************ 00:08:38.883 06:19:10 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:38.883 EAL: lib.eal log level changed from notice to debug 00:08:38.883 EAL: Detected lcore 0 as core 0 on socket 0 00:08:38.883 EAL: Detected lcore 1 as core 1 on socket 0 00:08:38.883 EAL: Detected lcore 2 as core 2 on socket 0 00:08:38.883 EAL: Detected lcore 3 as core 3 on socket 0 00:08:38.883 EAL: Detected lcore 4 as core 4 on socket 0 00:08:38.883 EAL: Detected lcore 5 as core 5 on socket 0 00:08:38.883 EAL: Detected lcore 6 as core 6 on socket 0 00:08:38.883 EAL: Detected lcore 7 as core 8 on socket 0 00:08:38.883 EAL: Detected lcore 8 as core 9 on socket 0 00:08:38.883 EAL: Detected lcore 9 as core 10 on socket 0 00:08:38.883 EAL: Detected lcore 10 as 
core 11 on socket 0 00:08:38.883 EAL: Detected lcore 11 as core 12 on socket 0 00:08:38.883 EAL: Detected lcore 12 as core 13 on socket 0 00:08:38.883 EAL: Detected lcore 13 as core 16 on socket 0 00:08:38.883 EAL: Detected lcore 14 as core 17 on socket 0 00:08:38.883 EAL: Detected lcore 15 as core 18 on socket 0 00:08:38.883 EAL: Detected lcore 16 as core 19 on socket 0 00:08:38.883 EAL: Detected lcore 17 as core 20 on socket 0 00:08:38.883 EAL: Detected lcore 18 as core 21 on socket 0 00:08:38.883 EAL: Detected lcore 19 as core 25 on socket 0 00:08:38.883 EAL: Detected lcore 20 as core 26 on socket 0 00:08:38.883 EAL: Detected lcore 21 as core 27 on socket 0 00:08:38.883 EAL: Detected lcore 22 as core 28 on socket 0 00:08:38.883 EAL: Detected lcore 23 as core 29 on socket 0 00:08:38.884 EAL: Detected lcore 24 as core 0 on socket 1 00:08:38.884 EAL: Detected lcore 25 as core 1 on socket 1 00:08:38.884 EAL: Detected lcore 26 as core 2 on socket 1 00:08:38.884 EAL: Detected lcore 27 as core 3 on socket 1 00:08:38.884 EAL: Detected lcore 28 as core 4 on socket 1 00:08:38.884 EAL: Detected lcore 29 as core 5 on socket 1 00:08:38.884 EAL: Detected lcore 30 as core 6 on socket 1 00:08:38.884 EAL: Detected lcore 31 as core 8 on socket 1 00:08:38.884 EAL: Detected lcore 32 as core 10 on socket 1 00:08:38.884 EAL: Detected lcore 33 as core 11 on socket 1 00:08:38.884 EAL: Detected lcore 34 as core 12 on socket 1 00:08:38.884 EAL: Detected lcore 35 as core 13 on socket 1 00:08:38.884 EAL: Detected lcore 36 as core 16 on socket 1 00:08:38.884 EAL: Detected lcore 37 as core 17 on socket 1 00:08:38.884 EAL: Detected lcore 38 as core 18 on socket 1 00:08:38.884 EAL: Detected lcore 39 as core 19 on socket 1 00:08:38.884 EAL: Detected lcore 40 as core 20 on socket 1 00:08:38.884 EAL: Detected lcore 41 as core 21 on socket 1 00:08:38.884 EAL: Detected lcore 42 as core 24 on socket 1 00:08:38.884 EAL: Detected lcore 43 as core 25 on socket 1 00:08:38.884 EAL: Detected lcore 44 as core 26 on socket 1 00:08:38.884 EAL: Detected lcore 45 as core 27 on socket 1 00:08:38.884 EAL: Detected lcore 46 as core 28 on socket 1 00:08:38.884 EAL: Detected lcore 47 as core 29 on socket 1 00:08:38.884 EAL: Detected lcore 48 as core 0 on socket 0 00:08:38.884 EAL: Detected lcore 49 as core 1 on socket 0 00:08:38.884 EAL: Detected lcore 50 as core 2 on socket 0 00:08:38.884 EAL: Detected lcore 51 as core 3 on socket 0 00:08:38.884 EAL: Detected lcore 52 as core 4 on socket 0 00:08:38.884 EAL: Detected lcore 53 as core 5 on socket 0 00:08:38.884 EAL: Detected lcore 54 as core 6 on socket 0 00:08:38.884 EAL: Detected lcore 55 as core 8 on socket 0 00:08:38.884 EAL: Detected lcore 56 as core 9 on socket 0 00:08:38.884 EAL: Detected lcore 57 as core 10 on socket 0 00:08:38.884 EAL: Detected lcore 58 as core 11 on socket 0 00:08:38.884 EAL: Detected lcore 59 as core 12 on socket 0 00:08:38.884 EAL: Detected lcore 60 as core 13 on socket 0 00:08:38.884 EAL: Detected lcore 61 as core 16 on socket 0 00:08:38.884 EAL: Detected lcore 62 as core 17 on socket 0 00:08:38.884 EAL: Detected lcore 63 as core 18 on socket 0 00:08:38.884 EAL: Detected lcore 64 as core 19 on socket 0 00:08:38.884 EAL: Detected lcore 65 as core 20 on socket 0 00:08:38.884 EAL: Detected lcore 66 as core 21 on socket 0 00:08:38.884 EAL: Detected lcore 67 as core 25 on socket 0 00:08:38.884 EAL: Detected lcore 68 as core 26 on socket 0 00:08:38.884 EAL: Detected lcore 69 as core 27 on socket 0 00:08:38.884 EAL: Detected lcore 70 as core 28 on socket 0 
00:08:38.884 EAL: Detected lcore 71 as core 29 on socket 0 00:08:38.884 EAL: Detected lcore 72 as core 0 on socket 1 00:08:38.884 EAL: Detected lcore 73 as core 1 on socket 1 00:08:38.884 EAL: Detected lcore 74 as core 2 on socket 1 00:08:38.884 EAL: Detected lcore 75 as core 3 on socket 1 00:08:38.884 EAL: Detected lcore 76 as core 4 on socket 1 00:08:38.884 EAL: Detected lcore 77 as core 5 on socket 1 00:08:38.884 EAL: Detected lcore 78 as core 6 on socket 1 00:08:38.884 EAL: Detected lcore 79 as core 8 on socket 1 00:08:38.884 EAL: Detected lcore 80 as core 10 on socket 1 00:08:38.884 EAL: Detected lcore 81 as core 11 on socket 1 00:08:38.884 EAL: Detected lcore 82 as core 12 on socket 1 00:08:38.884 EAL: Detected lcore 83 as core 13 on socket 1 00:08:38.884 EAL: Detected lcore 84 as core 16 on socket 1 00:08:38.884 EAL: Detected lcore 85 as core 17 on socket 1 00:08:38.884 EAL: Detected lcore 86 as core 18 on socket 1 00:08:38.884 EAL: Detected lcore 87 as core 19 on socket 1 00:08:38.884 EAL: Detected lcore 88 as core 20 on socket 1 00:08:38.884 EAL: Detected lcore 89 as core 21 on socket 1 00:08:38.884 EAL: Detected lcore 90 as core 24 on socket 1 00:08:38.884 EAL: Detected lcore 91 as core 25 on socket 1 00:08:38.884 EAL: Detected lcore 92 as core 26 on socket 1 00:08:38.884 EAL: Detected lcore 93 as core 27 on socket 1 00:08:38.884 EAL: Detected lcore 94 as core 28 on socket 1 00:08:38.884 EAL: Detected lcore 95 as core 29 on socket 1 00:08:38.884 EAL: Maximum logical cores by configuration: 128 00:08:38.884 EAL: Detected CPU lcores: 96 00:08:38.884 EAL: Detected NUMA nodes: 2 00:08:38.884 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:38.884 EAL: Detected shared linkage of DPDK 00:08:38.884 EAL: No shared files mode enabled, IPC will be disabled 00:08:38.884 EAL: Bus pci wants IOVA as 'DC' 00:08:38.884 EAL: Buses did not request a specific IOVA mode. 00:08:38.884 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:38.884 EAL: Selected IOVA mode 'VA' 00:08:38.884 EAL: Probing VFIO support... 00:08:38.884 EAL: IOMMU type 1 (Type 1) is supported 00:08:38.884 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:38.884 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:38.884 EAL: VFIO support initialized 00:08:38.884 EAL: Ask a virtual area of 0x2e000 bytes 00:08:38.884 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:38.884 EAL: Setting up physically contiguous memory... 
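The lcore inventory above (96 logical cores across two sockets) is the same topology the kernel exposes; a quick way to reproduce the mapping outside EAL, assuming util-linux's lscpu is available. One caveat: lscpu numbers cores globally, while EAL's "core N" above is the per-socket CPUID core id, so the core column may differ even though the lcore/socket mapping matches:

#!/usr/bin/env bash
# Sketch: print "lcore N as core M on socket S" lines like EAL's probe above,
# using lscpu's parseable output (CPU = logical id, CORE = core, SOCKET = package).
lscpu -p=CPU,CORE,SOCKET | awk -F, '
    !/^#/ { printf "lcore %s as core %s on socket %s\n", $1, $2, $3 }'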
00:08:38.884 EAL: Setting maximum number of open files to 524288 00:08:38.884 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:38.884 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:38.884 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:38.884 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.884 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:38.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:38.884 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.884 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:38.884 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:38.884 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.884 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:38.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:38.884 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.884 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:38.884 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:38.884 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.884 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:38.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:38.884 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.884 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:38.884 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:38.884 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.884 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:38.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:38.884 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.884 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:38.884 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:38.884 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:38.884 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.884 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:38.884 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:38.884 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.884 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:38.884 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:38.884 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.884 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:38.884 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:38.884 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.884 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:38.884 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:38.884 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.884 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:38.884 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:38.884 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.884 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:38.884 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:38.884 EAL: Ask a virtual area of 0x61000 bytes 00:08:38.884 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:38.884 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:38.884 EAL: Ask a virtual area of 0x400000000 bytes 00:08:38.884 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:08:38.884 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:38.884 EAL: Hugepages will be freed exactly as allocated. 00:08:38.884 EAL: No shared files mode enabled, IPC is disabled 00:08:38.884 EAL: No shared files mode enabled, IPC is disabled 00:08:38.884 EAL: TSC frequency is ~2100000 KHz 00:08:38.884 EAL: Main lcore 0 is ready (tid=7f23a5d42a00;cpuset=[0]) 00:08:38.884 EAL: Trying to obtain current memory policy. 00:08:38.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.884 EAL: Restoring previous memory policy: 0 00:08:38.884 EAL: request: mp_malloc_sync 00:08:38.884 EAL: No shared files mode enabled, IPC is disabled 00:08:38.884 EAL: Heap on socket 0 was expanded by 2MB 00:08:38.884 EAL: No shared files mode enabled, IPC is disabled 00:08:38.884 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:38.884 EAL: Mem event callback 'spdk:(nil)' registered 00:08:38.884 00:08:38.884 00:08:38.884 CUnit - A unit testing framework for C - Version 2.1-3 00:08:38.884 http://cunit.sourceforge.net/ 00:08:38.884 00:08:38.884 00:08:38.884 Suite: components_suite 00:08:38.884 Test: vtophys_malloc_test ...passed 00:08:38.884 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:38.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.884 EAL: Restoring previous memory policy: 4 00:08:38.884 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.884 EAL: request: mp_malloc_sync 00:08:38.884 EAL: No shared files mode enabled, IPC is disabled 00:08:38.884 EAL: Heap on socket 0 was expanded by 4MB 00:08:38.884 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.884 EAL: request: mp_malloc_sync 00:08:38.884 EAL: No shared files mode enabled, IPC is disabled 00:08:38.884 EAL: Heap on socket 0 was shrunk by 4MB 00:08:38.884 EAL: Trying to obtain current memory policy. 00:08:38.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.884 EAL: Restoring previous memory policy: 4 00:08:38.884 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.884 EAL: request: mp_malloc_sync 00:08:38.884 EAL: No shared files mode enabled, IPC is disabled 00:08:38.884 EAL: Heap on socket 0 was expanded by 6MB 00:08:38.884 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.884 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was shrunk by 6MB 00:08:38.885 EAL: Trying to obtain current memory policy. 00:08:38.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.885 EAL: Restoring previous memory policy: 4 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was expanded by 10MB 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was shrunk by 10MB 00:08:38.885 EAL: Trying to obtain current memory policy. 
00:08:38.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.885 EAL: Restoring previous memory policy: 4 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was expanded by 18MB 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was shrunk by 18MB 00:08:38.885 EAL: Trying to obtain current memory policy. 00:08:38.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.885 EAL: Restoring previous memory policy: 4 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was expanded by 34MB 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was shrunk by 34MB 00:08:38.885 EAL: Trying to obtain current memory policy. 00:08:38.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.885 EAL: Restoring previous memory policy: 4 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was expanded by 66MB 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was shrunk by 66MB 00:08:38.885 EAL: Trying to obtain current memory policy. 00:08:38.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.885 EAL: Restoring previous memory policy: 4 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was expanded by 130MB 00:08:38.885 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.885 EAL: request: mp_malloc_sync 00:08:38.885 EAL: No shared files mode enabled, IPC is disabled 00:08:38.885 EAL: Heap on socket 0 was shrunk by 130MB 00:08:38.885 EAL: Trying to obtain current memory policy. 00:08:38.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.190 EAL: Restoring previous memory policy: 4 00:08:39.190 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.190 EAL: request: mp_malloc_sync 00:08:39.190 EAL: No shared files mode enabled, IPC is disabled 00:08:39.190 EAL: Heap on socket 0 was expanded by 258MB 00:08:39.190 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.190 EAL: request: mp_malloc_sync 00:08:39.190 EAL: No shared files mode enabled, IPC is disabled 00:08:39.190 EAL: Heap on socket 0 was shrunk by 258MB 00:08:39.190 EAL: Trying to obtain current memory policy. 
00:08:39.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.190 EAL: Restoring previous memory policy: 4 00:08:39.190 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.190 EAL: request: mp_malloc_sync 00:08:39.190 EAL: No shared files mode enabled, IPC is disabled 00:08:39.190 EAL: Heap on socket 0 was expanded by 514MB 00:08:39.190 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.460 EAL: request: mp_malloc_sync 00:08:39.460 EAL: No shared files mode enabled, IPC is disabled 00:08:39.460 EAL: Heap on socket 0 was shrunk by 514MB 00:08:39.460 EAL: Trying to obtain current memory policy. 00:08:39.460 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.460 EAL: Restoring previous memory policy: 4 00:08:39.460 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.460 EAL: request: mp_malloc_sync 00:08:39.460 EAL: No shared files mode enabled, IPC is disabled 00:08:39.460 EAL: Heap on socket 0 was expanded by 1026MB 00:08:39.719 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.978 EAL: request: mp_malloc_sync 00:08:39.978 EAL: No shared files mode enabled, IPC is disabled 00:08:39.978 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:39.978 passed 00:08:39.978 00:08:39.978 Run Summary: Type Total Ran Passed Failed Inactive 00:08:39.978 suites 1 1 n/a 0 0 00:08:39.978 tests 2 2 2 0 0 00:08:39.978 asserts 497 497 497 0 n/a 00:08:39.978 00:08:39.978 Elapsed time = 0.967 seconds 00:08:39.978 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.978 EAL: request: mp_malloc_sync 00:08:39.978 EAL: No shared files mode enabled, IPC is disabled 00:08:39.978 EAL: Heap on socket 0 was shrunk by 2MB 00:08:39.978 EAL: No shared files mode enabled, IPC is disabled 00:08:39.978 EAL: No shared files mode enabled, IPC is disabled 00:08:39.978 EAL: No shared files mode enabled, IPC is disabled 00:08:39.978 00:08:39.978 real 0m1.100s 00:08:39.978 user 0m0.650s 00:08:39.978 sys 0m0.420s 00:08:39.978 06:19:11 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.978 06:19:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:39.978 ************************************ 00:08:39.978 END TEST env_vtophys 00:08:39.978 ************************************ 00:08:39.978 06:19:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:39.978 06:19:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:39.978 06:19:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.978 06:19:11 env -- common/autotest_common.sh@10 -- # set +x 00:08:39.978 ************************************ 00:08:39.978 START TEST env_pci 00:08:39.978 ************************************ 00:08:39.978 06:19:11 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:39.978 00:08:39.978 00:08:39.978 CUnit - A unit testing framework for C - Version 2.1-3 00:08:39.978 http://cunit.sourceforge.net/ 00:08:39.978 00:08:39.978 00:08:39.978 Suite: pci 00:08:39.978 Test: pci_hook ...[2024-11-20 06:19:11.663554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 351652 has claimed it 00:08:39.978 EAL: Cannot find device (10000:00:01.0) 00:08:39.978 EAL: Failed to attach device on primary process 00:08:39.978 passed 00:08:39.978 00:08:39.978 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:39.978 suites 1 1 n/a 0 0 00:08:39.978 tests 1 1 1 0 0 00:08:39.978 asserts 25 25 25 0 n/a 00:08:39.978 00:08:39.978 Elapsed time = 0.027 seconds 00:08:39.978 00:08:39.978 real 0m0.047s 00:08:39.978 user 0m0.011s 00:08:39.978 sys 0m0.035s 00:08:39.978 06:19:11 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.978 06:19:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:39.978 ************************************ 00:08:39.978 END TEST env_pci 00:08:39.978 ************************************ 00:08:39.978 06:19:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:39.978 06:19:11 env -- env/env.sh@15 -- # uname 00:08:39.978 06:19:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:39.978 06:19:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:39.978 06:19:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:39.978 06:19:11 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:39.978 06:19:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.978 06:19:11 env -- common/autotest_common.sh@10 -- # set +x 00:08:39.978 ************************************ 00:08:39.978 START TEST env_dpdk_post_init 00:08:39.978 ************************************ 00:08:39.978 06:19:11 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:39.978 EAL: Detected CPU lcores: 96 00:08:39.978 EAL: Detected NUMA nodes: 2 00:08:39.978 EAL: Detected shared linkage of DPDK 00:08:39.978 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:40.238 EAL: Selected IOVA mode 'VA' 00:08:40.238 EAL: VFIO support initialized 00:08:40.238 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:40.238 EAL: Using IOMMU type 1 (Type 1) 00:08:40.238 EAL: Ignore mapping IO port bar(1) 00:08:40.238 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:08:40.238 EAL: Ignore mapping IO port bar(1) 00:08:40.238 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:08:40.238 EAL: Ignore mapping IO port bar(1) 00:08:40.238 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:08:40.238 EAL: Ignore mapping IO port bar(1) 00:08:40.238 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:08:40.238 EAL: Ignore mapping IO port bar(1) 00:08:40.238 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:08:40.238 EAL: Ignore mapping IO port bar(1) 00:08:40.238 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:08:40.238 EAL: Ignore mapping IO port bar(1) 00:08:40.238 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:08:40.238 EAL: Ignore mapping IO port bar(1) 00:08:40.238 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:08:41.176 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:08:41.176 EAL: Ignore mapping IO port bar(1) 00:08:41.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:08:41.176 EAL: Ignore mapping IO port bar(1) 00:08:41.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:08:41.176 EAL: Ignore mapping IO port bar(1) 00:08:41.176 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:08:41.176 EAL: Ignore mapping IO port bar(1) 00:08:41.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:08:41.176 EAL: Ignore mapping IO port bar(1) 00:08:41.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:08:41.176 EAL: Ignore mapping IO port bar(1) 00:08:41.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:08:41.176 EAL: Ignore mapping IO port bar(1) 00:08:41.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:08:41.176 EAL: Ignore mapping IO port bar(1) 00:08:41.176 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:08:45.367 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:08:45.367 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:08:45.367 Starting DPDK initialization... 00:08:45.367 Starting SPDK post initialization... 00:08:45.367 SPDK NVMe probe 00:08:45.367 Attaching to 0000:5e:00.0 00:08:45.367 Attached to 0000:5e:00.0 00:08:45.367 Cleaning up... 00:08:45.367 00:08:45.367 real 0m4.960s 00:08:45.367 user 0m3.512s 00:08:45.367 sys 0m0.516s 00:08:45.367 06:19:16 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.367 06:19:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:45.367 ************************************ 00:08:45.367 END TEST env_dpdk_post_init 00:08:45.367 ************************************ 00:08:45.367 06:19:16 env -- env/env.sh@26 -- # uname 00:08:45.367 06:19:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:45.367 06:19:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:45.367 06:19:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:45.367 06:19:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.367 06:19:16 env -- common/autotest_common.sh@10 -- # set +x 00:08:45.367 ************************************ 00:08:45.367 START TEST env_mem_callbacks 00:08:45.367 ************************************ 00:08:45.367 06:19:16 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:45.367 EAL: Detected CPU lcores: 96 00:08:45.367 EAL: Detected NUMA nodes: 2 00:08:45.367 EAL: Detected shared linkage of DPDK 00:08:45.367 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:45.367 EAL: Selected IOVA mode 'VA' 00:08:45.367 EAL: VFIO support initialized 00:08:45.367 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:45.367 00:08:45.367 00:08:45.367 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.367 http://cunit.sourceforge.net/ 00:08:45.367 00:08:45.367 00:08:45.367 Suite: memory 00:08:45.367 Test: test ... 
00:08:45.367 register 0x200000200000 2097152 00:08:45.367 malloc 3145728 00:08:45.367 register 0x200000400000 4194304 00:08:45.367 buf 0x200000500000 len 3145728 PASSED 00:08:45.367 malloc 64 00:08:45.367 buf 0x2000004fff40 len 64 PASSED 00:08:45.367 malloc 4194304 00:08:45.367 register 0x200000800000 6291456 00:08:45.367 buf 0x200000a00000 len 4194304 PASSED 00:08:45.367 free 0x200000500000 3145728 00:08:45.367 free 0x2000004fff40 64 00:08:45.367 unregister 0x200000400000 4194304 PASSED 00:08:45.367 free 0x200000a00000 4194304 00:08:45.367 unregister 0x200000800000 6291456 PASSED 00:08:45.367 malloc 8388608 00:08:45.367 register 0x200000400000 10485760 00:08:45.367 buf 0x200000600000 len 8388608 PASSED 00:08:45.367 free 0x200000600000 8388608 00:08:45.367 unregister 0x200000400000 10485760 PASSED 00:08:45.367 passed 00:08:45.367 00:08:45.367 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.367 suites 1 1 n/a 0 0 00:08:45.367 tests 1 1 1 0 0 00:08:45.367 asserts 15 15 15 0 n/a 00:08:45.367 00:08:45.367 Elapsed time = 0.008 seconds 00:08:45.367 00:08:45.367 real 0m0.059s 00:08:45.367 user 0m0.023s 00:08:45.367 sys 0m0.036s 00:08:45.367 06:19:16 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.367 06:19:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:45.367 ************************************ 00:08:45.367 END TEST env_mem_callbacks 00:08:45.367 ************************************ 00:08:45.367 00:08:45.367 real 0m6.859s 00:08:45.367 user 0m4.564s 00:08:45.367 sys 0m1.366s 00:08:45.367 06:19:16 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.367 06:19:16 env -- common/autotest_common.sh@10 -- # set +x 00:08:45.367 ************************************ 00:08:45.367 END TEST env 00:08:45.367 ************************************ 00:08:45.367 06:19:16 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:45.367 06:19:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:45.367 06:19:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.367 06:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:45.367 ************************************ 00:08:45.367 START TEST rpc 00:08:45.367 ************************************ 00:08:45.367 06:19:16 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:45.367 * Looking for test storage... 
00:08:45.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:45.367 06:19:17 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.367 06:19:17 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.367 06:19:17 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.367 06:19:17 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.367 06:19:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.367 06:19:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.367 06:19:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.367 06:19:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.367 06:19:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.367 06:19:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.367 06:19:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.367 06:19:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.367 06:19:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.367 06:19:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.367 06:19:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.367 06:19:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:45.367 06:19:17 rpc -- scripts/common.sh@345 -- # : 1 00:08:45.367 06:19:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.367 06:19:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.367 06:19:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:45.367 06:19:17 rpc -- scripts/common.sh@353 -- # local d=1 00:08:45.367 06:19:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.367 06:19:17 rpc -- scripts/common.sh@355 -- # echo 1 00:08:45.367 06:19:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.367 06:19:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:45.367 06:19:17 rpc -- scripts/common.sh@353 -- # local d=2 00:08:45.367 06:19:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.367 06:19:17 rpc -- scripts/common.sh@355 -- # echo 2 00:08:45.367 06:19:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.368 06:19:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.368 06:19:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.368 06:19:17 rpc -- scripts/common.sh@368 -- # return 0 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.368 --rc genhtml_branch_coverage=1 00:08:45.368 --rc genhtml_function_coverage=1 00:08:45.368 --rc genhtml_legend=1 00:08:45.368 --rc geninfo_all_blocks=1 00:08:45.368 --rc geninfo_unexecuted_blocks=1 00:08:45.368 00:08:45.368 ' 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.368 --rc genhtml_branch_coverage=1 00:08:45.368 --rc genhtml_function_coverage=1 00:08:45.368 --rc genhtml_legend=1 00:08:45.368 --rc geninfo_all_blocks=1 00:08:45.368 --rc geninfo_unexecuted_blocks=1 00:08:45.368 00:08:45.368 ' 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.368 --rc genhtml_branch_coverage=1 00:08:45.368 --rc genhtml_function_coverage=1 
00:08:45.368 --rc genhtml_legend=1 00:08:45.368 --rc geninfo_all_blocks=1 00:08:45.368 --rc geninfo_unexecuted_blocks=1 00:08:45.368 00:08:45.368 ' 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.368 --rc genhtml_branch_coverage=1 00:08:45.368 --rc genhtml_function_coverage=1 00:08:45.368 --rc genhtml_legend=1 00:08:45.368 --rc geninfo_all_blocks=1 00:08:45.368 --rc geninfo_unexecuted_blocks=1 00:08:45.368 00:08:45.368 ' 00:08:45.368 06:19:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=352713 00:08:45.368 06:19:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:45.368 06:19:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.368 06:19:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 352713 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@833 -- # '[' -z 352713 ']' 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:45.368 06:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.627 [2024-11-20 06:19:17.207417] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:08:45.627 [2024-11-20 06:19:17.207466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352713 ] 00:08:45.627 [2024-11-20 06:19:17.278120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.627 [2024-11-20 06:19:17.319571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:45.627 [2024-11-20 06:19:17.319608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 352713' to capture a snapshot of events at runtime. 00:08:45.627 [2024-11-20 06:19:17.319615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.627 [2024-11-20 06:19:17.319623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.627 [2024-11-20 06:19:17.319629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid352713 for offline analysis/debug. 
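The app_setup_trace notices above are a how-to in themselves: spdk_tgt was started with '-e bdev' (visible in the rpc.sh trace above), so the bdev tpoint group is live and can be snapshotted while the target runs. A short sketch using only the names the notices print; substitute your own pid when reproducing, and note the offline '-f' form is an assumption paired with the 'copy /dev/shm/...' hint rather than something this log demonstrates:

    # Snapshot live tracepoints from the running target (pid 352713 taken from the notices above).
    ./build/bin/spdk_trace -s spdk_tgt -p 352713
    # Or keep the shm file for offline analysis, per the notice above
    # (assumes your build's spdk_trace reads a saved file via -f).
    cp /dev/shm/spdk_tgt_trace.pid352713 /tmp/
    ./build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid352713
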
00:08:45.627 [2024-11-20 06:19:17.320174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.886 06:19:17 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.886 06:19:17 rpc -- common/autotest_common.sh@866 -- # return 0 00:08:45.886 06:19:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:45.886 06:19:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:45.886 06:19:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:45.886 06:19:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:45.886 06:19:17 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:45.886 06:19:17 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.886 06:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.886 ************************************ 00:08:45.886 START TEST rpc_integrity 00:08:45.886 ************************************ 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:45.886 { 00:08:45.886 "name": "Malloc0", 00:08:45.886 "aliases": [ 00:08:45.886 "ce228a3e-ae0c-4b9e-9f6f-28b70fe9f211" 00:08:45.886 ], 00:08:45.886 "product_name": "Malloc disk", 00:08:45.886 "block_size": 512, 00:08:45.886 "num_blocks": 16384, 00:08:45.886 "uuid": "ce228a3e-ae0c-4b9e-9f6f-28b70fe9f211", 00:08:45.886 "assigned_rate_limits": { 00:08:45.886 "rw_ios_per_sec": 0, 00:08:45.886 "rw_mbytes_per_sec": 0, 00:08:45.886 "r_mbytes_per_sec": 0, 00:08:45.886 "w_mbytes_per_sec": 0 00:08:45.886 }, 
00:08:45.886 "claimed": false, 00:08:45.886 "zoned": false, 00:08:45.886 "supported_io_types": { 00:08:45.886 "read": true, 00:08:45.886 "write": true, 00:08:45.886 "unmap": true, 00:08:45.886 "flush": true, 00:08:45.886 "reset": true, 00:08:45.886 "nvme_admin": false, 00:08:45.886 "nvme_io": false, 00:08:45.886 "nvme_io_md": false, 00:08:45.886 "write_zeroes": true, 00:08:45.886 "zcopy": true, 00:08:45.886 "get_zone_info": false, 00:08:45.886 "zone_management": false, 00:08:45.886 "zone_append": false, 00:08:45.886 "compare": false, 00:08:45.886 "compare_and_write": false, 00:08:45.886 "abort": true, 00:08:45.886 "seek_hole": false, 00:08:45.886 "seek_data": false, 00:08:45.886 "copy": true, 00:08:45.886 "nvme_iov_md": false 00:08:45.886 }, 00:08:45.886 "memory_domains": [ 00:08:45.886 { 00:08:45.886 "dma_device_id": "system", 00:08:45.886 "dma_device_type": 1 00:08:45.886 }, 00:08:45.886 { 00:08:45.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.886 "dma_device_type": 2 00:08:45.886 } 00:08:45.886 ], 00:08:45.886 "driver_specific": {} 00:08:45.886 } 00:08:45.886 ]' 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.886 [2024-11-20 06:19:17.688220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:45.886 [2024-11-20 06:19:17.688249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.886 [2024-11-20 06:19:17.688262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa7270 00:08:45.886 [2024-11-20 06:19:17.688268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.886 [2024-11-20 06:19:17.689342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.886 [2024-11-20 06:19:17.689363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:45.886 Passthru0 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.886 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.886 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.145 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.145 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:46.145 { 00:08:46.145 "name": "Malloc0", 00:08:46.145 "aliases": [ 00:08:46.145 "ce228a3e-ae0c-4b9e-9f6f-28b70fe9f211" 00:08:46.145 ], 00:08:46.145 "product_name": "Malloc disk", 00:08:46.145 "block_size": 512, 00:08:46.145 "num_blocks": 16384, 00:08:46.145 "uuid": "ce228a3e-ae0c-4b9e-9f6f-28b70fe9f211", 00:08:46.145 "assigned_rate_limits": { 00:08:46.145 "rw_ios_per_sec": 0, 00:08:46.145 "rw_mbytes_per_sec": 0, 00:08:46.145 "r_mbytes_per_sec": 0, 00:08:46.145 "w_mbytes_per_sec": 0 00:08:46.145 }, 00:08:46.145 "claimed": true, 00:08:46.145 "claim_type": "exclusive_write", 00:08:46.145 "zoned": false, 00:08:46.146 "supported_io_types": { 00:08:46.146 "read": true, 00:08:46.146 "write": true, 00:08:46.146 "unmap": true, 00:08:46.146 "flush": 
true, 00:08:46.146 "reset": true, 00:08:46.146 "nvme_admin": false, 00:08:46.146 "nvme_io": false, 00:08:46.146 "nvme_io_md": false, 00:08:46.146 "write_zeroes": true, 00:08:46.146 "zcopy": true, 00:08:46.146 "get_zone_info": false, 00:08:46.146 "zone_management": false, 00:08:46.146 "zone_append": false, 00:08:46.146 "compare": false, 00:08:46.146 "compare_and_write": false, 00:08:46.146 "abort": true, 00:08:46.146 "seek_hole": false, 00:08:46.146 "seek_data": false, 00:08:46.146 "copy": true, 00:08:46.146 "nvme_iov_md": false 00:08:46.146 }, 00:08:46.146 "memory_domains": [ 00:08:46.146 { 00:08:46.146 "dma_device_id": "system", 00:08:46.146 "dma_device_type": 1 00:08:46.146 }, 00:08:46.146 { 00:08:46.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.146 "dma_device_type": 2 00:08:46.146 } 00:08:46.146 ], 00:08:46.146 "driver_specific": {} 00:08:46.146 }, 00:08:46.146 { 00:08:46.146 "name": "Passthru0", 00:08:46.146 "aliases": [ 00:08:46.146 "c9f10b05-d79d-52cf-b0c7-a98440231299" 00:08:46.146 ], 00:08:46.146 "product_name": "passthru", 00:08:46.146 "block_size": 512, 00:08:46.146 "num_blocks": 16384, 00:08:46.146 "uuid": "c9f10b05-d79d-52cf-b0c7-a98440231299", 00:08:46.146 "assigned_rate_limits": { 00:08:46.146 "rw_ios_per_sec": 0, 00:08:46.146 "rw_mbytes_per_sec": 0, 00:08:46.146 "r_mbytes_per_sec": 0, 00:08:46.146 "w_mbytes_per_sec": 0 00:08:46.146 }, 00:08:46.146 "claimed": false, 00:08:46.146 "zoned": false, 00:08:46.146 "supported_io_types": { 00:08:46.146 "read": true, 00:08:46.146 "write": true, 00:08:46.146 "unmap": true, 00:08:46.146 "flush": true, 00:08:46.146 "reset": true, 00:08:46.146 "nvme_admin": false, 00:08:46.146 "nvme_io": false, 00:08:46.146 "nvme_io_md": false, 00:08:46.146 "write_zeroes": true, 00:08:46.146 "zcopy": true, 00:08:46.146 "get_zone_info": false, 00:08:46.146 "zone_management": false, 00:08:46.146 "zone_append": false, 00:08:46.146 "compare": false, 00:08:46.146 "compare_and_write": false, 00:08:46.146 "abort": true, 00:08:46.146 "seek_hole": false, 00:08:46.146 "seek_data": false, 00:08:46.146 "copy": true, 00:08:46.146 "nvme_iov_md": false 00:08:46.146 }, 00:08:46.146 "memory_domains": [ 00:08:46.146 { 00:08:46.146 "dma_device_id": "system", 00:08:46.146 "dma_device_type": 1 00:08:46.146 }, 00:08:46.146 { 00:08:46.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.146 "dma_device_type": 2 00:08:46.146 } 00:08:46.146 ], 00:08:46.146 "driver_specific": { 00:08:46.146 "passthru": { 00:08:46.146 "name": "Passthru0", 00:08:46.146 "base_bdev_name": "Malloc0" 00:08:46.146 } 00:08:46.146 } 00:08:46.146 } 00:08:46.146 ]' 00:08:46.146 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:46.146 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:46.146 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.146 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.146 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.146 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:46.146 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:46.146 06:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:46.146 00:08:46.146 real 0m0.270s 00:08:46.146 user 0m0.170s 00:08:46.146 sys 0m0.037s 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.146 06:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.146 ************************************ 00:08:46.146 END TEST rpc_integrity 00:08:46.146 ************************************ 00:08:46.146 06:19:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:46.146 06:19:17 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.146 06:19:17 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.146 06:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.146 ************************************ 00:08:46.146 START TEST rpc_plugins 00:08:46.146 ************************************ 00:08:46.146 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:08:46.146 06:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:46.146 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.146 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:46.146 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.146 06:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:46.146 06:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:46.146 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.146 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:46.146 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.146 06:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:46.146 { 00:08:46.146 "name": "Malloc1", 00:08:46.146 "aliases": [ 00:08:46.146 "21e528c5-d8be-49e5-9d39-1ea3ded0e514" 00:08:46.146 ], 00:08:46.146 "product_name": "Malloc disk", 00:08:46.146 "block_size": 4096, 00:08:46.146 "num_blocks": 256, 00:08:46.146 "uuid": "21e528c5-d8be-49e5-9d39-1ea3ded0e514", 00:08:46.146 "assigned_rate_limits": { 00:08:46.146 "rw_ios_per_sec": 0, 00:08:46.146 "rw_mbytes_per_sec": 0, 00:08:46.146 "r_mbytes_per_sec": 0, 00:08:46.146 "w_mbytes_per_sec": 0 00:08:46.146 }, 00:08:46.146 "claimed": false, 00:08:46.146 "zoned": false, 00:08:46.146 "supported_io_types": { 00:08:46.146 "read": true, 00:08:46.146 "write": true, 00:08:46.146 "unmap": true, 00:08:46.146 "flush": true, 00:08:46.146 "reset": true, 00:08:46.146 "nvme_admin": false, 00:08:46.146 "nvme_io": false, 00:08:46.146 "nvme_io_md": false, 00:08:46.146 "write_zeroes": true, 00:08:46.146 "zcopy": true, 00:08:46.146 "get_zone_info": false, 00:08:46.146 "zone_management": false, 00:08:46.146 "zone_append": false, 00:08:46.146 "compare": false, 00:08:46.146 "compare_and_write": false, 00:08:46.146 "abort": true, 00:08:46.146 "seek_hole": false, 00:08:46.146 "seek_data": false, 00:08:46.146 "copy": true, 00:08:46.146 "nvme_iov_md": false 
00:08:46.146 }, 00:08:46.146 "memory_domains": [ 00:08:46.146 { 00:08:46.146 "dma_device_id": "system", 00:08:46.146 "dma_device_type": 1 00:08:46.146 }, 00:08:46.146 { 00:08:46.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.146 "dma_device_type": 2 00:08:46.146 } 00:08:46.146 ], 00:08:46.146 "driver_specific": {} 00:08:46.146 } 00:08:46.146 ]' 00:08:46.146 06:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:46.405 06:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:46.405 06:19:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:46.405 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.405 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:46.405 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.405 06:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:46.405 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.405 06:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:46.405 06:19:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.405 06:19:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:46.405 06:19:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:46.405 06:19:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:46.405 00:08:46.405 real 0m0.143s 00:08:46.405 user 0m0.087s 00:08:46.405 sys 0m0.018s 00:08:46.405 06:19:18 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.405 06:19:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:46.405 ************************************ 00:08:46.405 END TEST rpc_plugins 00:08:46.405 ************************************ 00:08:46.405 06:19:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:46.405 06:19:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.405 06:19:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.405 06:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.405 ************************************ 00:08:46.405 START TEST rpc_trace_cmd_test 00:08:46.405 ************************************ 00:08:46.405 06:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:08:46.405 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:46.405 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:46.405 06:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.405 06:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.405 06:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.405 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:46.405 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid352713", 00:08:46.405 "tpoint_group_mask": "0x8", 00:08:46.405 "iscsi_conn": { 00:08:46.405 "mask": "0x2", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "scsi": { 00:08:46.405 "mask": "0x4", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "bdev": { 00:08:46.405 "mask": "0x8", 00:08:46.405 "tpoint_mask": "0xffffffffffffffff" 00:08:46.405 }, 00:08:46.405 "nvmf_rdma": { 00:08:46.405 "mask": "0x10", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "nvmf_tcp": { 00:08:46.405 "mask": "0x20", 00:08:46.405 
"tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "ftl": { 00:08:46.405 "mask": "0x40", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "blobfs": { 00:08:46.405 "mask": "0x80", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "dsa": { 00:08:46.405 "mask": "0x200", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "thread": { 00:08:46.405 "mask": "0x400", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "nvme_pcie": { 00:08:46.405 "mask": "0x800", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "iaa": { 00:08:46.405 "mask": "0x1000", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "nvme_tcp": { 00:08:46.405 "mask": "0x2000", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "bdev_nvme": { 00:08:46.405 "mask": "0x4000", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "sock": { 00:08:46.405 "mask": "0x8000", 00:08:46.405 "tpoint_mask": "0x0" 00:08:46.405 }, 00:08:46.405 "blob": { 00:08:46.406 "mask": "0x10000", 00:08:46.406 "tpoint_mask": "0x0" 00:08:46.406 }, 00:08:46.406 "bdev_raid": { 00:08:46.406 "mask": "0x20000", 00:08:46.406 "tpoint_mask": "0x0" 00:08:46.406 }, 00:08:46.406 "scheduler": { 00:08:46.406 "mask": "0x40000", 00:08:46.406 "tpoint_mask": "0x0" 00:08:46.406 } 00:08:46.406 }' 00:08:46.406 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:46.406 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:46.406 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:46.406 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:46.406 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:46.664 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:46.664 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:46.664 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:46.664 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:46.664 06:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:46.664 00:08:46.664 real 0m0.214s 00:08:46.664 user 0m0.180s 00:08:46.664 sys 0m0.024s 00:08:46.664 06:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.664 06:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.664 ************************************ 00:08:46.664 END TEST rpc_trace_cmd_test 00:08:46.664 ************************************ 00:08:46.664 06:19:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:46.664 06:19:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:46.664 06:19:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:46.664 06:19:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.664 06:19:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.664 06:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.664 ************************************ 00:08:46.664 START TEST rpc_daemon_integrity 00:08:46.664 ************************************ 00:08:46.664 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:08:46.664 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:46.664 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.664 06:19:18 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.664 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:46.665 { 00:08:46.665 "name": "Malloc2", 00:08:46.665 "aliases": [ 00:08:46.665 "29cf3366-b559-4d12-b90c-c5a5f0fc8c43" 00:08:46.665 ], 00:08:46.665 "product_name": "Malloc disk", 00:08:46.665 "block_size": 512, 00:08:46.665 "num_blocks": 16384, 00:08:46.665 "uuid": "29cf3366-b559-4d12-b90c-c5a5f0fc8c43", 00:08:46.665 "assigned_rate_limits": { 00:08:46.665 "rw_ios_per_sec": 0, 00:08:46.665 "rw_mbytes_per_sec": 0, 00:08:46.665 "r_mbytes_per_sec": 0, 00:08:46.665 "w_mbytes_per_sec": 0 00:08:46.665 }, 00:08:46.665 "claimed": false, 00:08:46.665 "zoned": false, 00:08:46.665 "supported_io_types": { 00:08:46.665 "read": true, 00:08:46.665 "write": true, 00:08:46.665 "unmap": true, 00:08:46.665 "flush": true, 00:08:46.665 "reset": true, 00:08:46.665 "nvme_admin": false, 00:08:46.665 "nvme_io": false, 00:08:46.665 "nvme_io_md": false, 00:08:46.665 "write_zeroes": true, 00:08:46.665 "zcopy": true, 00:08:46.665 "get_zone_info": false, 00:08:46.665 "zone_management": false, 00:08:46.665 "zone_append": false, 00:08:46.665 "compare": false, 00:08:46.665 "compare_and_write": false, 00:08:46.665 "abort": true, 00:08:46.665 "seek_hole": false, 00:08:46.665 "seek_data": false, 00:08:46.665 "copy": true, 00:08:46.665 "nvme_iov_md": false 00:08:46.665 }, 00:08:46.665 "memory_domains": [ 00:08:46.665 { 00:08:46.665 "dma_device_id": "system", 00:08:46.665 "dma_device_type": 1 00:08:46.665 }, 00:08:46.665 { 00:08:46.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.665 "dma_device_type": 2 00:08:46.665 } 00:08:46.665 ], 00:08:46.665 "driver_specific": {} 00:08:46.665 } 00:08:46.665 ]' 00:08:46.665 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.924 [2024-11-20 06:19:18.538538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:46.924 
[2024-11-20 06:19:18.538566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.924 [2024-11-20 06:19:18.538577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbeae00 00:08:46.924 [2024-11-20 06:19:18.538583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.924 [2024-11-20 06:19:18.539554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.924 [2024-11-20 06:19:18.539575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:46.924 Passthru0 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:46.924 { 00:08:46.924 "name": "Malloc2", 00:08:46.924 "aliases": [ 00:08:46.924 "29cf3366-b559-4d12-b90c-c5a5f0fc8c43" 00:08:46.924 ], 00:08:46.924 "product_name": "Malloc disk", 00:08:46.924 "block_size": 512, 00:08:46.924 "num_blocks": 16384, 00:08:46.924 "uuid": "29cf3366-b559-4d12-b90c-c5a5f0fc8c43", 00:08:46.924 "assigned_rate_limits": { 00:08:46.924 "rw_ios_per_sec": 0, 00:08:46.924 "rw_mbytes_per_sec": 0, 00:08:46.924 "r_mbytes_per_sec": 0, 00:08:46.924 "w_mbytes_per_sec": 0 00:08:46.924 }, 00:08:46.924 "claimed": true, 00:08:46.924 "claim_type": "exclusive_write", 00:08:46.924 "zoned": false, 00:08:46.924 "supported_io_types": { 00:08:46.924 "read": true, 00:08:46.924 "write": true, 00:08:46.924 "unmap": true, 00:08:46.924 "flush": true, 00:08:46.924 "reset": true, 00:08:46.924 "nvme_admin": false, 00:08:46.924 "nvme_io": false, 00:08:46.924 "nvme_io_md": false, 00:08:46.924 "write_zeroes": true, 00:08:46.924 "zcopy": true, 00:08:46.924 "get_zone_info": false, 00:08:46.924 "zone_management": false, 00:08:46.924 "zone_append": false, 00:08:46.924 "compare": false, 00:08:46.924 "compare_and_write": false, 00:08:46.924 "abort": true, 00:08:46.924 "seek_hole": false, 00:08:46.924 "seek_data": false, 00:08:46.924 "copy": true, 00:08:46.924 "nvme_iov_md": false 00:08:46.924 }, 00:08:46.924 "memory_domains": [ 00:08:46.924 { 00:08:46.924 "dma_device_id": "system", 00:08:46.924 "dma_device_type": 1 00:08:46.924 }, 00:08:46.924 { 00:08:46.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.924 "dma_device_type": 2 00:08:46.924 } 00:08:46.924 ], 00:08:46.924 "driver_specific": {} 00:08:46.924 }, 00:08:46.924 { 00:08:46.924 "name": "Passthru0", 00:08:46.924 "aliases": [ 00:08:46.924 "bb268e41-ed45-5e74-acbf-83903b299c37" 00:08:46.924 ], 00:08:46.924 "product_name": "passthru", 00:08:46.924 "block_size": 512, 00:08:46.924 "num_blocks": 16384, 00:08:46.924 "uuid": "bb268e41-ed45-5e74-acbf-83903b299c37", 00:08:46.924 "assigned_rate_limits": { 00:08:46.924 "rw_ios_per_sec": 0, 00:08:46.924 "rw_mbytes_per_sec": 0, 00:08:46.924 "r_mbytes_per_sec": 0, 00:08:46.924 "w_mbytes_per_sec": 0 00:08:46.924 }, 00:08:46.924 "claimed": false, 00:08:46.924 "zoned": false, 00:08:46.924 "supported_io_types": { 00:08:46.924 "read": true, 00:08:46.924 "write": true, 00:08:46.924 "unmap": true, 00:08:46.924 "flush": true, 00:08:46.924 "reset": true, 
00:08:46.924 "nvme_admin": false, 00:08:46.924 "nvme_io": false, 00:08:46.924 "nvme_io_md": false, 00:08:46.924 "write_zeroes": true, 00:08:46.924 "zcopy": true, 00:08:46.924 "get_zone_info": false, 00:08:46.924 "zone_management": false, 00:08:46.924 "zone_append": false, 00:08:46.924 "compare": false, 00:08:46.924 "compare_and_write": false, 00:08:46.924 "abort": true, 00:08:46.924 "seek_hole": false, 00:08:46.924 "seek_data": false, 00:08:46.924 "copy": true, 00:08:46.924 "nvme_iov_md": false 00:08:46.924 }, 00:08:46.924 "memory_domains": [ 00:08:46.924 { 00:08:46.924 "dma_device_id": "system", 00:08:46.924 "dma_device_type": 1 00:08:46.924 }, 00:08:46.924 { 00:08:46.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.924 "dma_device_type": 2 00:08:46.924 } 00:08:46.924 ], 00:08:46.924 "driver_specific": { 00:08:46.924 "passthru": { 00:08:46.924 "name": "Passthru0", 00:08:46.924 "base_bdev_name": "Malloc2" 00:08:46.924 } 00:08:46.924 } 00:08:46.924 } 00:08:46.924 ]' 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:46.924 00:08:46.924 real 0m0.286s 00:08:46.924 user 0m0.181s 00:08:46.924 sys 0m0.037s 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.924 06:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:46.924 ************************************ 00:08:46.924 END TEST rpc_daemon_integrity 00:08:46.924 ************************************ 00:08:46.924 06:19:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:46.924 06:19:18 rpc -- rpc/rpc.sh@84 -- # killprocess 352713 00:08:46.924 06:19:18 rpc -- common/autotest_common.sh@952 -- # '[' -z 352713 ']' 00:08:46.924 06:19:18 rpc -- common/autotest_common.sh@956 -- # kill -0 352713 00:08:46.924 06:19:18 rpc -- common/autotest_common.sh@957 -- # uname 00:08:46.924 06:19:18 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:46.924 06:19:18 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 352713 
00:08:47.183 06:19:18 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:47.183 06:19:18 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:47.183 06:19:18 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 352713' 00:08:47.183 killing process with pid 352713 00:08:47.183 06:19:18 rpc -- common/autotest_common.sh@971 -- # kill 352713 00:08:47.183 06:19:18 rpc -- common/autotest_common.sh@976 -- # wait 352713 00:08:47.443 00:08:47.443 real 0m2.091s 00:08:47.443 user 0m2.655s 00:08:47.443 sys 0m0.703s 00:08:47.443 06:19:19 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:47.443 06:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.443 ************************************ 00:08:47.443 END TEST rpc 00:08:47.443 ************************************ 00:08:47.443 06:19:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:47.443 06:19:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:47.443 06:19:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:47.443 06:19:19 -- common/autotest_common.sh@10 -- # set +x 00:08:47.443 ************************************ 00:08:47.443 START TEST skip_rpc 00:08:47.443 ************************************ 00:08:47.443 06:19:19 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:47.443 * Looking for test storage... 00:08:47.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:47.443 06:19:19 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:47.443 06:19:19 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:47.443 06:19:19 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.702 06:19:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:47.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.702 --rc genhtml_branch_coverage=1 00:08:47.702 --rc genhtml_function_coverage=1 00:08:47.702 --rc genhtml_legend=1 00:08:47.702 --rc geninfo_all_blocks=1 00:08:47.702 --rc geninfo_unexecuted_blocks=1 00:08:47.702 00:08:47.702 ' 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:47.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.702 --rc genhtml_branch_coverage=1 00:08:47.702 --rc genhtml_function_coverage=1 00:08:47.702 --rc genhtml_legend=1 00:08:47.702 --rc geninfo_all_blocks=1 00:08:47.702 --rc geninfo_unexecuted_blocks=1 00:08:47.702 00:08:47.702 ' 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:47.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.702 --rc genhtml_branch_coverage=1 00:08:47.702 --rc genhtml_function_coverage=1 00:08:47.702 --rc genhtml_legend=1 00:08:47.702 --rc geninfo_all_blocks=1 00:08:47.702 --rc geninfo_unexecuted_blocks=1 00:08:47.702 00:08:47.702 ' 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:47.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.702 --rc genhtml_branch_coverage=1 00:08:47.702 --rc genhtml_function_coverage=1 00:08:47.702 --rc genhtml_legend=1 00:08:47.702 --rc geninfo_all_blocks=1 00:08:47.702 --rc geninfo_unexecuted_blocks=1 00:08:47.702 00:08:47.702 ' 00:08:47.702 06:19:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:47.702 06:19:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:47.702 06:19:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:47.702 06:19:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.702 ************************************ 00:08:47.702 START TEST skip_rpc 00:08:47.702 ************************************ 00:08:47.702 06:19:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:08:47.703 
06:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:47.703 06:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=353291 00:08:47.703 06:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:47.703 06:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:47.703 [2024-11-20 06:19:19.397605] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:08:47.703 [2024-11-20 06:19:19.397641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353291 ] 00:08:47.703 [2024-11-20 06:19:19.472884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.703 [2024-11-20 06:19:19.512745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 353291 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 353291 ']' 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 353291 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 353291 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 353291' 00:08:52.972 killing process with pid 353291 00:08:52.972 06:19:24 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 353291 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 353291 00:08:52.972 00:08:52.972 real 0m5.360s 00:08:52.972 user 0m5.130s 00:08:52.972 sys 0m0.268s 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:52.972 06:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.972 ************************************ 00:08:52.972 END TEST skip_rpc 00:08:52.972 ************************************ 00:08:52.972 06:19:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:52.972 06:19:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:52.972 06:19:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:52.972 06:19:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.972 ************************************ 00:08:52.972 START TEST skip_rpc_with_json 00:08:52.972 ************************************ 00:08:52.972 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:08:52.972 06:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:52.972 06:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=354192 00:08:52.972 06:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:52.972 06:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:52.972 06:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 354192 00:08:52.972 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 354192 ']' 00:08:52.973 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.973 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.973 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.973 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.973 06:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.232 [2024-11-20 06:19:24.842251] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
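A hedged sketch of the skip_rpc case that just ended, reconstructed from the xtrace above (rpc_cmd is assumed to resolve to scripts/rpc.py on the default socket; the paths and flags are taken verbatim from the log):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$rootdir/build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # rpc/skip_rpc.sh@15
spdk_pid=$!
sleep 5                                                # rpc/skip_rpc.sh@19
# With --no-rpc-server the target must refuse RPC clients, so this call has to fail.
if "$rootdir"/scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered despite --no-rpc-server" >&2
    exit 1
fi
kill "$spdk_pid"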
00:08:53.232 [2024-11-20 06:19:24.842295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354192 ] 00:08:53.232 [2024-11-20 06:19:24.914206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.232 [2024-11-20 06:19:24.956034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.490 [2024-11-20 06:19:25.168485] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:53.490 request: 00:08:53.490 { 00:08:53.490 "trtype": "tcp", 00:08:53.490 "method": "nvmf_get_transports", 00:08:53.490 "req_id": 1 00:08:53.490 } 00:08:53.490 Got JSON-RPC error response 00:08:53.490 response: 00:08:53.490 { 00:08:53.490 "code": -19, 00:08:53.490 "message": "No such device" 00:08:53.490 } 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.490 [2024-11-20 06:19:25.180594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.490 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.750 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.750 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:53.750 { 00:08:53.750 "subsystems": [ 00:08:53.750 { 00:08:53.750 "subsystem": "fsdev", 00:08:53.750 "config": [ 00:08:53.750 { 00:08:53.750 "method": "fsdev_set_opts", 00:08:53.750 "params": { 00:08:53.750 "fsdev_io_pool_size": 65535, 00:08:53.750 "fsdev_io_cache_size": 256 00:08:53.750 } 00:08:53.750 } 00:08:53.750 ] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "vfio_user_target", 00:08:53.750 "config": null 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "keyring", 00:08:53.750 "config": [] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "iobuf", 00:08:53.750 "config": [ 00:08:53.750 { 00:08:53.750 "method": "iobuf_set_options", 00:08:53.750 "params": { 00:08:53.750 "small_pool_count": 8192, 00:08:53.750 "large_pool_count": 1024, 00:08:53.750 "small_bufsize": 8192, 00:08:53.750 "large_bufsize": 135168, 00:08:53.750 "enable_numa": false 00:08:53.750 } 00:08:53.750 } 00:08:53.750 
] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "sock", 00:08:53.750 "config": [ 00:08:53.750 { 00:08:53.750 "method": "sock_set_default_impl", 00:08:53.750 "params": { 00:08:53.750 "impl_name": "posix" 00:08:53.750 } 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "method": "sock_impl_set_options", 00:08:53.750 "params": { 00:08:53.750 "impl_name": "ssl", 00:08:53.750 "recv_buf_size": 4096, 00:08:53.750 "send_buf_size": 4096, 00:08:53.750 "enable_recv_pipe": true, 00:08:53.750 "enable_quickack": false, 00:08:53.750 "enable_placement_id": 0, 00:08:53.750 "enable_zerocopy_send_server": true, 00:08:53.750 "enable_zerocopy_send_client": false, 00:08:53.750 "zerocopy_threshold": 0, 00:08:53.750 "tls_version": 0, 00:08:53.750 "enable_ktls": false 00:08:53.750 } 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "method": "sock_impl_set_options", 00:08:53.750 "params": { 00:08:53.750 "impl_name": "posix", 00:08:53.750 "recv_buf_size": 2097152, 00:08:53.750 "send_buf_size": 2097152, 00:08:53.750 "enable_recv_pipe": true, 00:08:53.750 "enable_quickack": false, 00:08:53.750 "enable_placement_id": 0, 00:08:53.750 "enable_zerocopy_send_server": true, 00:08:53.750 "enable_zerocopy_send_client": false, 00:08:53.750 "zerocopy_threshold": 0, 00:08:53.750 "tls_version": 0, 00:08:53.750 "enable_ktls": false 00:08:53.750 } 00:08:53.750 } 00:08:53.750 ] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "vmd", 00:08:53.750 "config": [] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "accel", 00:08:53.750 "config": [ 00:08:53.750 { 00:08:53.750 "method": "accel_set_options", 00:08:53.750 "params": { 00:08:53.750 "small_cache_size": 128, 00:08:53.750 "large_cache_size": 16, 00:08:53.750 "task_count": 2048, 00:08:53.750 "sequence_count": 2048, 00:08:53.750 "buf_count": 2048 00:08:53.750 } 00:08:53.750 } 00:08:53.750 ] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "bdev", 00:08:53.750 "config": [ 00:08:53.750 { 00:08:53.750 "method": "bdev_set_options", 00:08:53.750 "params": { 00:08:53.750 "bdev_io_pool_size": 65535, 00:08:53.750 "bdev_io_cache_size": 256, 00:08:53.750 "bdev_auto_examine": true, 00:08:53.750 "iobuf_small_cache_size": 128, 00:08:53.750 "iobuf_large_cache_size": 16 00:08:53.750 } 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "method": "bdev_raid_set_options", 00:08:53.750 "params": { 00:08:53.750 "process_window_size_kb": 1024, 00:08:53.750 "process_max_bandwidth_mb_sec": 0 00:08:53.750 } 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "method": "bdev_iscsi_set_options", 00:08:53.750 "params": { 00:08:53.750 "timeout_sec": 30 00:08:53.750 } 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "method": "bdev_nvme_set_options", 00:08:53.750 "params": { 00:08:53.750 "action_on_timeout": "none", 00:08:53.750 "timeout_us": 0, 00:08:53.750 "timeout_admin_us": 0, 00:08:53.750 "keep_alive_timeout_ms": 10000, 00:08:53.750 "arbitration_burst": 0, 00:08:53.750 "low_priority_weight": 0, 00:08:53.750 "medium_priority_weight": 0, 00:08:53.750 "high_priority_weight": 0, 00:08:53.750 "nvme_adminq_poll_period_us": 10000, 00:08:53.750 "nvme_ioq_poll_period_us": 0, 00:08:53.750 "io_queue_requests": 0, 00:08:53.750 "delay_cmd_submit": true, 00:08:53.750 "transport_retry_count": 4, 00:08:53.750 "bdev_retry_count": 3, 00:08:53.750 "transport_ack_timeout": 0, 00:08:53.750 "ctrlr_loss_timeout_sec": 0, 00:08:53.750 "reconnect_delay_sec": 0, 00:08:53.750 "fast_io_fail_timeout_sec": 0, 00:08:53.750 "disable_auto_failback": false, 00:08:53.750 "generate_uuids": false, 00:08:53.750 "transport_tos": 0, 
00:08:53.750 "nvme_error_stat": false, 00:08:53.750 "rdma_srq_size": 0, 00:08:53.750 "io_path_stat": false, 00:08:53.750 "allow_accel_sequence": false, 00:08:53.750 "rdma_max_cq_size": 0, 00:08:53.750 "rdma_cm_event_timeout_ms": 0, 00:08:53.750 "dhchap_digests": [ 00:08:53.750 "sha256", 00:08:53.750 "sha384", 00:08:53.750 "sha512" 00:08:53.750 ], 00:08:53.750 "dhchap_dhgroups": [ 00:08:53.750 "null", 00:08:53.750 "ffdhe2048", 00:08:53.750 "ffdhe3072", 00:08:53.750 "ffdhe4096", 00:08:53.750 "ffdhe6144", 00:08:53.750 "ffdhe8192" 00:08:53.750 ] 00:08:53.750 } 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "method": "bdev_nvme_set_hotplug", 00:08:53.750 "params": { 00:08:53.750 "period_us": 100000, 00:08:53.750 "enable": false 00:08:53.750 } 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "method": "bdev_wait_for_examine" 00:08:53.750 } 00:08:53.750 ] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "scsi", 00:08:53.750 "config": null 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "scheduler", 00:08:53.750 "config": [ 00:08:53.750 { 00:08:53.750 "method": "framework_set_scheduler", 00:08:53.750 "params": { 00:08:53.750 "name": "static" 00:08:53.750 } 00:08:53.750 } 00:08:53.750 ] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "vhost_scsi", 00:08:53.750 "config": [] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "vhost_blk", 00:08:53.750 "config": [] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "ublk", 00:08:53.750 "config": [] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "nbd", 00:08:53.750 "config": [] 00:08:53.750 }, 00:08:53.750 { 00:08:53.750 "subsystem": "nvmf", 00:08:53.750 "config": [ 00:08:53.750 { 00:08:53.750 "method": "nvmf_set_config", 00:08:53.750 "params": { 00:08:53.750 "discovery_filter": "match_any", 00:08:53.750 "admin_cmd_passthru": { 00:08:53.750 "identify_ctrlr": false 00:08:53.750 }, 00:08:53.750 "dhchap_digests": [ 00:08:53.750 "sha256", 00:08:53.750 "sha384", 00:08:53.750 "sha512" 00:08:53.750 ], 00:08:53.750 "dhchap_dhgroups": [ 00:08:53.750 "null", 00:08:53.750 "ffdhe2048", 00:08:53.750 "ffdhe3072", 00:08:53.750 "ffdhe4096", 00:08:53.750 "ffdhe6144", 00:08:53.751 "ffdhe8192" 00:08:53.751 ] 00:08:53.751 } 00:08:53.751 }, 00:08:53.751 { 00:08:53.751 "method": "nvmf_set_max_subsystems", 00:08:53.751 "params": { 00:08:53.751 "max_subsystems": 1024 00:08:53.751 } 00:08:53.751 }, 00:08:53.751 { 00:08:53.751 "method": "nvmf_set_crdt", 00:08:53.751 "params": { 00:08:53.751 "crdt1": 0, 00:08:53.751 "crdt2": 0, 00:08:53.751 "crdt3": 0 00:08:53.751 } 00:08:53.751 }, 00:08:53.751 { 00:08:53.751 "method": "nvmf_create_transport", 00:08:53.751 "params": { 00:08:53.751 "trtype": "TCP", 00:08:53.751 "max_queue_depth": 128, 00:08:53.751 "max_io_qpairs_per_ctrlr": 127, 00:08:53.751 "in_capsule_data_size": 4096, 00:08:53.751 "max_io_size": 131072, 00:08:53.751 "io_unit_size": 131072, 00:08:53.751 "max_aq_depth": 128, 00:08:53.751 "num_shared_buffers": 511, 00:08:53.751 "buf_cache_size": 4294967295, 00:08:53.751 "dif_insert_or_strip": false, 00:08:53.751 "zcopy": false, 00:08:53.751 "c2h_success": true, 00:08:53.751 "sock_priority": 0, 00:08:53.751 "abort_timeout_sec": 1, 00:08:53.751 "ack_timeout": 0, 00:08:53.751 "data_wr_pool_size": 0 00:08:53.751 } 00:08:53.751 } 00:08:53.751 ] 00:08:53.751 }, 00:08:53.751 { 00:08:53.751 "subsystem": "iscsi", 00:08:53.751 "config": [ 00:08:53.751 { 00:08:53.751 "method": "iscsi_set_options", 00:08:53.751 "params": { 00:08:53.751 "node_base": "iqn.2016-06.io.spdk", 00:08:53.751 "max_sessions": 
128, 00:08:53.751 "max_connections_per_session": 2, 00:08:53.751 "max_queue_depth": 64, 00:08:53.751 "default_time2wait": 2, 00:08:53.751 "default_time2retain": 20, 00:08:53.751 "first_burst_length": 8192, 00:08:53.751 "immediate_data": true, 00:08:53.751 "allow_duplicated_isid": false, 00:08:53.751 "error_recovery_level": 0, 00:08:53.751 "nop_timeout": 60, 00:08:53.751 "nop_in_interval": 30, 00:08:53.751 "disable_chap": false, 00:08:53.751 "require_chap": false, 00:08:53.751 "mutual_chap": false, 00:08:53.751 "chap_group": 0, 00:08:53.751 "max_large_datain_per_connection": 64, 00:08:53.751 "max_r2t_per_connection": 4, 00:08:53.751 "pdu_pool_size": 36864, 00:08:53.751 "immediate_data_pool_size": 16384, 00:08:53.751 "data_out_pool_size": 2048 00:08:53.751 } 00:08:53.751 } 00:08:53.751 ] 00:08:53.751 } 00:08:53.751 ] 00:08:53.751 } 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 354192 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 354192 ']' 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 354192 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 354192 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 354192' 00:08:53.751 killing process with pid 354192 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 354192 00:08:53.751 06:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 354192 00:08:54.009 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=354311 00:08:54.009 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:54.009 06:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 354311 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 354311 ']' 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 354311 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 354311 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 354311' 00:08:59.276 killing process with pid 354311 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 354311 00:08:59.276 06:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 354311 00:08:59.276 06:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:59.276 06:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:59.276 00:08:59.276 real 0m6.279s 00:08:59.276 user 0m5.972s 00:08:59.276 sys 0m0.600s 00:08:59.276 06:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.276 06:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:59.276 ************************************ 00:08:59.276 END TEST skip_rpc_with_json 00:08:59.276 ************************************ 00:08:59.276 06:19:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:59.276 06:19:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:59.276 06:19:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.276 06:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.536 ************************************ 00:08:59.536 START TEST skip_rpc_with_delay 00:08:59.536 ************************************ 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:59.536 [2024-11-20 
06:19:31.192404] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.536 00:08:59.536 real 0m0.069s 00:08:59.536 user 0m0.044s 00:08:59.536 sys 0m0.024s 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.536 06:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:59.536 ************************************ 00:08:59.536 END TEST skip_rpc_with_delay 00:08:59.536 ************************************ 00:08:59.536 06:19:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:59.536 06:19:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:59.536 06:19:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:59.536 06:19:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:59.536 06:19:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.536 06:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.536 ************************************ 00:08:59.536 START TEST exit_on_failed_rpc_init 00:08:59.536 ************************************ 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=355282 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 355282 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 355282 ']' 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:59.536 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:59.536 [2024-11-20 06:19:31.326644] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:08:59.536 [2024-11-20 06:19:31.326687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355282 ] 00:08:59.796 [2024-11-20 06:19:31.401737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.796 [2024-11-20 06:19:31.444487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:00.055 [2024-11-20 06:19:31.719782] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:00.055 [2024-11-20 06:19:31.719827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355418 ] 00:09:00.055 [2024-11-20 06:19:31.792475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.055 [2024-11-20 06:19:31.832728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.055 [2024-11-20 06:19:31.832779] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:00.055 [2024-11-20 06:19:31.832788] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:00.055 [2024-11-20 06:19:31.832794] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 355282 00:09:00.055 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 355282 ']' 00:09:00.056 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 355282 00:09:00.056 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:09:00.056 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:00.056 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 355282 00:09:00.315 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:00.315 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:00.315 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 355282' 00:09:00.315 killing process with pid 355282 00:09:00.315 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 355282 00:09:00.315 06:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 355282 00:09:00.574 00:09:00.574 real 0m0.954s 00:09:00.574 user 0m1.004s 00:09:00.574 sys 0m0.390s 00:09:00.574 06:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.574 06:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:00.574 ************************************ 00:09:00.574 END TEST exit_on_failed_rpc_init 00:09:00.574 ************************************ 00:09:00.574 06:19:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:00.574 00:09:00.574 real 0m13.128s 00:09:00.574 user 0m12.368s 00:09:00.574 sys 0m1.562s 00:09:00.574 06:19:32 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.574 06:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.574 ************************************ 00:09:00.574 END TEST skip_rpc 00:09:00.574 ************************************ 00:09:00.574 06:19:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:00.574 06:19:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:00.574 06:19:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.574 06:19:32 -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.574 ************************************ 00:09:00.574 START TEST rpc_client 00:09:00.574 ************************************ 00:09:00.574 06:19:32 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:00.834 * Looking for test storage... 00:09:00.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:09:00.834 06:19:32 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:00.834 06:19:32 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:09:00.834 06:19:32 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:00.834 06:19:32 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.834 06:19:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:00.834 06:19:32 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.834 06:19:32 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:00.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.834 --rc genhtml_branch_coverage=1 00:09:00.834 --rc genhtml_function_coverage=1 00:09:00.835 --rc genhtml_legend=1 00:09:00.835 --rc geninfo_all_blocks=1 00:09:00.835 --rc geninfo_unexecuted_blocks=1 00:09:00.835 00:09:00.835 ' 00:09:00.835 06:19:32 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:00.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.835 --rc genhtml_branch_coverage=1 00:09:00.835 --rc genhtml_function_coverage=1 00:09:00.835 --rc genhtml_legend=1 00:09:00.835 --rc geninfo_all_blocks=1 00:09:00.835 --rc geninfo_unexecuted_blocks=1 00:09:00.835 00:09:00.835 ' 00:09:00.835 06:19:32 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:00.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.835 --rc genhtml_branch_coverage=1 00:09:00.835 --rc genhtml_function_coverage=1 00:09:00.835 --rc genhtml_legend=1 00:09:00.835 --rc geninfo_all_blocks=1 00:09:00.835 --rc geninfo_unexecuted_blocks=1 00:09:00.835 00:09:00.835 ' 00:09:00.835 06:19:32 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:00.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.835 --rc genhtml_branch_coverage=1 00:09:00.835 --rc genhtml_function_coverage=1 00:09:00.835 --rc genhtml_legend=1 00:09:00.835 --rc geninfo_all_blocks=1 00:09:00.835 --rc geninfo_unexecuted_blocks=1 00:09:00.835 00:09:00.835 ' 00:09:00.835 06:19:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:09:00.835 OK 00:09:00.835 06:19:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:00.835 00:09:00.835 real 0m0.196s 00:09:00.835 user 0m0.124s 00:09:00.835 sys 0m0.085s 00:09:00.835 06:19:32 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.835 06:19:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:00.835 ************************************ 00:09:00.835 END TEST rpc_client 00:09:00.835 ************************************ 00:09:00.835 06:19:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
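The repeated xtrace above comes from scripts/common.sh gating coverage flags on the installed lcov version. A hedged reconstruction of that gate, comparing version components split on '.', '-' and ':' (variable names follow the trace; the control flow is a paraphrase, not the exact source):

lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. 1.15, as seen in the trace
IFS=.-: read -ra ver1 <<< "$lcov_ver"
IFS=.-: read -ra ver2 <<< "2"
lt=0
for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && break            # ver1 newer: not less-than
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; } # ver1 older: less-than
done
# lcov older than 2 enables the legacy --rc coverage options exported above.
(( lt )) && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'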
00:09:00.835 06:19:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:00.835 06:19:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.835 06:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:00.835 ************************************ 00:09:00.835 START TEST json_config 00:09:00.835 ************************************ 00:09:00.835 06:19:32 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:00.835 06:19:32 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.095 06:19:32 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.095 06:19:32 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.095 06:19:32 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.095 06:19:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.095 06:19:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.095 06:19:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.095 06:19:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.095 06:19:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.095 06:19:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.095 06:19:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.095 06:19:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.095 06:19:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.095 06:19:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.095 06:19:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.095 06:19:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:01.095 06:19:32 json_config -- scripts/common.sh@345 -- # : 1 00:09:01.095 06:19:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.095 06:19:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.095 06:19:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:01.095 06:19:32 json_config -- scripts/common.sh@353 -- # local d=1 00:09:01.095 06:19:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.095 06:19:32 json_config -- scripts/common.sh@355 -- # echo 1 00:09:01.095 06:19:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.095 06:19:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:01.095 06:19:32 json_config -- scripts/common.sh@353 -- # local d=2 00:09:01.095 06:19:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.095 06:19:32 json_config -- scripts/common.sh@355 -- # echo 2 00:09:01.095 06:19:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.095 06:19:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.095 06:19:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.095 06:19:32 json_config -- scripts/common.sh@368 -- # return 0 00:09:01.095 06:19:32 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.095 06:19:32 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.095 --rc genhtml_branch_coverage=1 00:09:01.095 --rc genhtml_function_coverage=1 00:09:01.095 --rc genhtml_legend=1 00:09:01.095 --rc geninfo_all_blocks=1 00:09:01.095 --rc geninfo_unexecuted_blocks=1 00:09:01.095 00:09:01.095 ' 00:09:01.095 06:19:32 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.095 --rc genhtml_branch_coverage=1 00:09:01.095 --rc genhtml_function_coverage=1 00:09:01.095 --rc genhtml_legend=1 00:09:01.095 --rc geninfo_all_blocks=1 00:09:01.095 --rc geninfo_unexecuted_blocks=1 00:09:01.095 00:09:01.095 ' 00:09:01.095 06:19:32 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:01.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.095 --rc genhtml_branch_coverage=1 00:09:01.095 --rc genhtml_function_coverage=1 00:09:01.095 --rc genhtml_legend=1 00:09:01.095 --rc geninfo_all_blocks=1 00:09:01.095 --rc geninfo_unexecuted_blocks=1 00:09:01.095 00:09:01.095 ' 00:09:01.095 06:19:32 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.095 --rc genhtml_branch_coverage=1 00:09:01.095 --rc genhtml_function_coverage=1 00:09:01.095 --rc genhtml_legend=1 00:09:01.095 --rc geninfo_all_blocks=1 00:09:01.095 --rc geninfo_unexecuted_blocks=1 00:09:01.095 00:09:01.095 ' 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:09:01.095 06:19:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.095 06:19:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.095 06:19:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.095 06:19:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.095 06:19:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.095 06:19:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.095 06:19:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.095 06:19:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.095 06:19:32 json_config -- paths/export.sh@5 -- # export PATH 00:09:01.095 06:19:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@51 -- # : 0 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:09:01.095 06:19:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.095 06:19:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:01.095 06:19:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:01.096 INFO: JSON configuration test init 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.096 06:19:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:01.096 06:19:32 json_config -- 
json_config/common.sh@9 -- # local app=target 00:09:01.096 06:19:32 json_config -- json_config/common.sh@10 -- # shift 00:09:01.096 06:19:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:01.096 06:19:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:01.096 06:19:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:01.096 06:19:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:01.096 06:19:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:01.096 06:19:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=355649 00:09:01.096 06:19:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:01.096 Waiting for target to run... 00:09:01.096 06:19:32 json_config -- json_config/common.sh@25 -- # waitforlisten 355649 /var/tmp/spdk_tgt.sock 00:09:01.096 06:19:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@833 -- # '[' -z 355649 ']' 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:01.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:01.096 06:19:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.096 [2024-11-20 06:19:32.852513] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:01.096 [2024-11-20 06:19:32.852565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355649 ] 00:09:01.355 [2024-11-20 06:19:33.147759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.355 [2024-11-20 06:19:33.181225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.923 06:19:33 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:01.923 06:19:33 json_config -- common/autotest_common.sh@866 -- # return 0 00:09:01.923 06:19:33 json_config -- json_config/common.sh@26 -- # echo '' 00:09:01.923 00:09:01.923 06:19:33 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:01.923 06:19:33 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:01.923 06:19:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.923 06:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.923 06:19:33 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:01.923 06:19:33 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:01.923 06:19:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.923 06:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.923 06:19:33 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:01.923 06:19:33 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:01.923 06:19:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:05.210 06:19:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.210 06:19:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:05.210 06:19:36 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:05.210 06:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:05.210 06:19:37 json_config -- 
00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@54 -- # sort
00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:09:05.210 06:19:37 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:09:05.210 06:19:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:05.210 06:19:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@62 -- # return 0
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:09:05.468 06:19:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:05.468 06:19:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:09:05.468 06:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:09:05.468 MallocForNvmf0
00:09:05.468 06:19:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:09:05.468 06:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:09:05.727 MallocForNvmf1
00:09:05.727 06:19:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:09:05.727 06:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:09:05.985 [2024-11-20 06:19:37.654778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:05.985 06:19:37 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:05.985 06:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:06.243 06:19:37 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:09:06.243 06:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:09:06.243 06:19:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:09:06.243 06:19:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:09:06.501 06:19:38 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:09:06.501 06:19:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:09:06.759 [2024-11-20 06:19:38.409198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:09:06.759 06:19:38 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:09:06.759 06:19:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:06.759 06:19:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:06.759 06:19:38 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:09:06.759 06:19:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:06.759 06:19:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:06.759 06:19:38 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:09:06.759 06:19:38 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:06.759 06:19:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:07.017 MallocBdevForConfigChangeCheck
00:09:07.017 06:19:38 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:09:07.017 06:19:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:07.017 06:19:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:07.017 06:19:38 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:09:07.017 06:19:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:07.277 06:19:39 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:09:07.277 INFO: shutting down applications...
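The create_nvmf_subsystem_config steps traced above are plain rpc.py calls and can be replayed by hand against the same socket. The commands are verbatim from the trace; the flag annotations are editorial:

    # Two malloc bdevs: 8 MB with 512 B blocks, 4 MB with 1024 B blocks
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport: -u io-unit-size, -c in-capsule-data-size
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    # Subsystem allowing any host (-a) with serial number -s, two namespaces, one TCP listener
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420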
00:09:07.277 06:19:39 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:09:07.277 06:19:39 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:09:07.277 06:19:39 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:09:07.277 06:19:39 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:09:09.820 Calling clear_iscsi_subsystem
00:09:09.820 Calling clear_nvmf_subsystem
00:09:09.820 Calling clear_nbd_subsystem
00:09:09.820 Calling clear_ublk_subsystem
00:09:09.820 Calling clear_vhost_blk_subsystem
00:09:09.820 Calling clear_vhost_scsi_subsystem
00:09:09.820 Calling clear_bdev_subsystem
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@350 -- # count=100
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@352 -- # break
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:09:09.820 06:19:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:09:09.820 06:19:41 json_config -- json_config/common.sh@31 -- # local app=target
00:09:09.820 06:19:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:09.820 06:19:41 json_config -- json_config/common.sh@35 -- # [[ -n 355649 ]]
00:09:09.820 06:19:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 355649
00:09:09.820 06:19:41 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:09.820 06:19:41 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:09.820 06:19:41 json_config -- json_config/common.sh@41 -- # kill -0 355649
00:09:09.820 06:19:41 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:09:10.388 06:19:42 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:09:10.388 06:19:42 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:10.388 06:19:42 json_config -- json_config/common.sh@41 -- # kill -0 355649
00:09:10.388 06:19:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:10.388 06:19:42 json_config -- json_config/common.sh@43 -- # break
00:09:10.388 06:19:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:10.388 06:19:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:10.388 SPDK target shutdown done
00:09:10.388 06:19:42 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:09:10.388 INFO: relaunching applications...
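json_config_test_shutdown_app, traced above, is a SIGINT-then-poll pattern: kill -0 probes for process existence without delivering a signal, and the loop gives the target up to thirty half-second intervals to exit on its own. A minimal standalone sketch of the same loop (pid value taken from this run, otherwise illustrative):

    pid=355649                                # the target's pid in this run
    kill -SIGINT "$pid"                       # ask the app to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0: existence check only
        sleep 0.5
    done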
00:09:10.388 06:19:42 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:10.388 06:19:42 json_config -- json_config/common.sh@9 -- # local app=target
00:09:10.388 06:19:42 json_config -- json_config/common.sh@10 -- # shift
00:09:10.388 06:19:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:10.388 06:19:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:10.388 06:19:42 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:10.388 06:19:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:10.388 06:19:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:10.388 06:19:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=357389
00:09:10.388 06:19:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:10.388 Waiting for target to run...
00:09:10.388 06:19:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:10.388 06:19:42 json_config -- json_config/common.sh@25 -- # waitforlisten 357389 /var/tmp/spdk_tgt.sock
00:09:10.388 06:19:42 json_config -- common/autotest_common.sh@833 -- # '[' -z 357389 ']'
00:09:10.388 06:19:42 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:10.388 06:19:42 json_config -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:10.388 06:19:42 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:10.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:10.388 06:19:42 json_config -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:10.388 06:19:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:10.956 [2024-11-20 06:19:42.188474] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:09:10.956 [2024-11-20 06:19:42.188531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357389 ]
00:09:10.956 [2024-11-20 06:19:42.648032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:10.956 [2024-11-20 06:19:42.705840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.243 [2024-11-20 06:19:45.732219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:14.243 [2024-11-20 06:19:45.764584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:09:14.810 06:19:46 json_config -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:14.810 06:19:46 json_config -- common/autotest_common.sh@866 -- # return 0
00:09:14.810 06:19:46 json_config -- json_config/common.sh@26 -- # echo ''
00:09:14.810 
00:09:14.810 06:19:46 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:09:14.810 06:19:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:09:14.810 INFO: Checking if target configuration is the same...
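The relaunch above shows the spdk_tgt invocation used throughout these tests: -m sets the reactor core mask, -s caps hugepage memory in MB, -r sets the RPC socket path, and --json replays the configuration saved a moment earlier; waitforlisten then blocks until the RPC socket answers. Reduced to one line (paths shortened to be relative to the SPDK tree):

    # Restart the target from a previously saved configuration file
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &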
00:09:14.810 06:19:46 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:14.810 06:19:46 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:09:14.810 06:19:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:14.810 + '[' 2 -ne 2 ']'
00:09:14.810 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:09:14.810 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:09:14.810 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:14.810 +++ basename /dev/fd/62
00:09:14.810 ++ mktemp /tmp/62.XXX
00:09:14.810 + tmp_file_1=/tmp/62.ooT
00:09:14.810 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:14.810 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:14.810 + tmp_file_2=/tmp/spdk_tgt_config.json.wSu
00:09:14.810 + ret=0
00:09:14.810 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:15.069 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:15.069 + diff -u /tmp/62.ooT /tmp/spdk_tgt_config.json.wSu
00:09:15.069 + echo 'INFO: JSON config files are the same'
00:09:15.069 INFO: JSON config files are the same
00:09:15.069 + rm /tmp/62.ooT /tmp/spdk_tgt_config.json.wSu
00:09:15.069 + exit 0
00:09:15.069 06:19:46 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:09:15.069 06:19:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:09:15.069 INFO: changing configuration and checking if this can be detected...
00:09:15.069 06:19:46 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:15.069 06:19:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:15.327 06:19:46 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:15.327 06:19:46 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:09:15.327 06:19:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:15.327 + '[' 2 -ne 2 ']'
00:09:15.327 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:09:15.327 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
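json_diff.sh, expanded in the xtrace output here, never diffs raw configs: both inputs are first normalized with config_filter.py -method sort so ordering differences cannot mask or fake a change, and /dev/fd/62 is the live config arriving through process substitution. The same-config check reduces roughly to the sketch below (temp-file plumbing elided, paths shortened):

    # Exit status 0 only if the sorted live config matches the sorted file
    diff -u <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort) \
            <(test/json_config/config_filter.py -method sort < spdk_tgt_config.json) \
        && echo 'INFO: JSON config files are the same'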
00:09:15.327 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:15.327 +++ basename /dev/fd/62
00:09:15.327 ++ mktemp /tmp/62.XXX
00:09:15.327 + tmp_file_1=/tmp/62.4jn
00:09:15.327 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:15.327 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:15.327 + tmp_file_2=/tmp/spdk_tgt_config.json.JMp
00:09:15.327 + ret=0
00:09:15.327 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:15.586 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:15.586 + diff -u /tmp/62.4jn /tmp/spdk_tgt_config.json.JMp
00:09:15.586 + ret=1
00:09:15.586 + echo '=== Start of file: /tmp/62.4jn ==='
00:09:15.586 + cat /tmp/62.4jn
00:09:15.586 + echo '=== End of file: /tmp/62.4jn ==='
00:09:15.586 + echo ''
00:09:15.586 + echo '=== Start of file: /tmp/spdk_tgt_config.json.JMp ==='
00:09:15.586 + cat /tmp/spdk_tgt_config.json.JMp
00:09:15.586 + echo '=== End of file: /tmp/spdk_tgt_config.json.JMp ==='
00:09:15.586 + echo ''
00:09:15.586 + rm /tmp/62.4jn /tmp/spdk_tgt_config.json.JMp
00:09:15.586 + exit 1
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:09:15.586 INFO: configuration change detected.
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:09:15.586 06:19:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:15.586 06:19:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@324 -- # [[ -n 357389 ]]
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:09:15.586 06:19:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:15.586 06:19:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@200 -- # uname -s
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:09:15.586 06:19:47 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:09:15.586 06:19:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:15.586 06:19:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:15.845 06:19:47 json_config -- json_config/json_config.sh@330 -- # killprocess 357389
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@952 -- # '[' -z 357389 ']'
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@956 -- # kill -0 357389
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@957 -- # uname
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
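The second json_diff.sh run exits 1 by design: deleting the sentinel bdev at json_config.sh@393 guarantees the live configuration has diverged from the file saved before the relaunch, so a zero diff at this point would mean save_config is not reflecting runtime changes. The whole perturbation is a single RPC:

    # Remove the sentinel bdev so the live config must differ from the saved file
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck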
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 357389
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 357389'
00:09:15.845 killing process with pid 357389
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@971 -- # kill 357389
00:09:15.845 06:19:47 json_config -- common/autotest_common.sh@976 -- # wait 357389
00:09:17.750 06:19:49 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:17.750 06:19:49 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:09:17.750 06:19:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:17.750 06:19:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:17.750 06:19:49 json_config -- json_config/json_config.sh@335 -- # return 0
00:09:17.750 06:19:49 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:09:17.750 INFO: Success
00:09:17.750 
00:09:17.750 real 0m16.969s
00:09:17.750 user 0m17.591s
00:09:17.750 sys 0m2.558s
00:09:17.750 06:19:49 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:17.750 06:19:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:17.750 ************************************
00:09:17.750 END TEST json_config
00:09:17.750 ************************************
00:09:18.010 06:19:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:18.010 06:19:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:18.010 06:19:49 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:18.010 06:19:49 -- common/autotest_common.sh@10 -- # set +x
00:09:18.011 ************************************
00:09:18.011 START TEST json_config_extra_key
00:09:18.011 ************************************
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:09:18.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:18.011 --rc genhtml_branch_coverage=1
00:09:18.011 --rc genhtml_function_coverage=1
00:09:18.011 --rc genhtml_legend=1
00:09:18.011 --rc geninfo_all_blocks=1
00:09:18.011 --rc geninfo_unexecuted_blocks=1
00:09:18.011 
00:09:18.011 '
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:09:18.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:18.011 --rc genhtml_branch_coverage=1
00:09:18.011 --rc genhtml_function_coverage=1
00:09:18.011 --rc genhtml_legend=1
00:09:18.011 --rc geninfo_all_blocks=1
00:09:18.011 --rc geninfo_unexecuted_blocks=1
00:09:18.011 
00:09:18.011 '
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:09:18.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:18.011 --rc genhtml_branch_coverage=1
00:09:18.011 --rc genhtml_function_coverage=1
00:09:18.011 --rc genhtml_legend=1
00:09:18.011 --rc geninfo_all_blocks=1
00:09:18.011 --rc geninfo_unexecuted_blocks=1
00:09:18.011 
00:09:18.011 '
00:09:18.011 06:19:49 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:09:18.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:18.011 --rc genhtml_branch_coverage=1
00:09:18.011 --rc genhtml_function_coverage=1
00:09:18.011 --rc genhtml_legend=1
00:09:18.011 --rc geninfo_all_blocks=1
00:09:18.011 --rc geninfo_unexecuted_blocks=1
00:09:18.011 
00:09:18.011 '
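The lt 1.15 2 trace above walks scripts/common.sh's cmp_versions helper: split both version strings on '.', '-' and ':', then compare numerically field by field until one side wins. A condensed sketch of the same idea, omitting the non-numeric sanitizing the real helper does through its decimal function:

    # lt VER1 VER2: succeed (return 0) if VER1 sorts strictly before VER2
    lt() {
        local IFS=.-: v
        local -a a b
        read -ra a <<<"$1"
        read -ra b <<<"$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first differing field decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                                        # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov is older than 2'            # true here: 1 < 2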
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:18.011 06:19:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:18.011 06:19:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:18.011 06:19:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:18.011 06:19:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:18.011 06:19:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:09:18.011 06:19:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:18.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:18.011 06:19:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:18.011 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:09:18.012 INFO: launching applications...
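The complaint from nvmf/common.sh line 33 above ('[' '' -eq 1 ']') is test(1) being handed an empty string where -eq requires an integer, because the flag being tested expands empty in this environment; the run continues regardless. The defensive idiom is to default the expansion (FLAG is a placeholder name, since the trace does not show which variable expanded empty):

    # '' -eq 1 fails in test(1); "${FLAG:-0}" substitutes 0 when FLAG is unset or empty
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo 'feature enabled'
    fi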
00:09:18.012 06:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=358886
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:18.012 Waiting for target to run...
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 358886 /var/tmp/spdk_tgt.sock
00:09:18.012 06:19:49 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 358886 ']'
00:09:18.012 06:19:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:09:18.012 06:19:49 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:18.012 06:19:49 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:18.012 06:19:49 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:18.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:18.012 06:19:49 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:18.012 06:19:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:18.337 [2024-11-20 06:19:49.886534] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:09:18.337 [2024-11-20 06:19:49.886584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358886 ]
00:09:18.596 [2024-11-20 06:19:50.178534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.596 [2024-11-20 06:19:50.213406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:19.164 06:19:50 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:19.164 06:19:50 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:09:19.164 
00:09:19.164 06:19:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:09:19.164 INFO: shutting down applications...
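waitforlisten, traced above with max_retries=100, blocks until the freshly launched target answers on its RPC socket. An equivalent-in-spirit poll using only rpc.py (this is a sketch, not the helper's actual implementation; retry cadence illustrative):

    # Probe the RPC socket until any method call succeeds
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done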
00:09:19.164 06:19:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 358886 ]]
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 358886
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 358886
00:09:19.164 06:19:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:19.423 06:19:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:19.423 06:19:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:19.423 06:19:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 358886
00:09:19.423 06:19:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:19.423 06:19:51 json_config_extra_key -- json_config/common.sh@43 -- # break
00:09:19.423 06:19:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:19.423 06:19:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:19.423 SPDK target shutdown done
00:09:19.423 06:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:09:19.423 Success
00:09:19.423 
00:09:19.423 real 0m1.576s
00:09:19.423 user 0m1.364s
00:09:19.423 sys 0m0.392s
00:09:19.423 06:19:51 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:19.423 06:19:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:19.423 ************************************
00:09:19.423 END TEST json_config_extra_key
00:09:19.423 ************************************
00:09:19.682 06:19:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:19.682 06:19:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:19.682 06:19:51 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:19.682 06:19:51 -- common/autotest_common.sh@10 -- # set +x
00:09:19.682 ************************************
00:09:19.682 START TEST alias_rpc
00:09:19.682 ************************************
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:19.682 * Looking for test storage...
00:09:19.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@345 -- # : 1
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:19.682 06:19:51 alias_rpc -- scripts/common.sh@368 -- # return 0
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:09:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.682 --rc genhtml_branch_coverage=1
00:09:19.682 --rc genhtml_function_coverage=1
00:09:19.682 --rc genhtml_legend=1
00:09:19.682 --rc geninfo_all_blocks=1
00:09:19.682 --rc geninfo_unexecuted_blocks=1
00:09:19.682 
00:09:19.682 '
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:09:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.682 --rc genhtml_branch_coverage=1
00:09:19.682 --rc genhtml_function_coverage=1
00:09:19.682 --rc genhtml_legend=1
00:09:19.682 --rc geninfo_all_blocks=1
00:09:19.682 --rc geninfo_unexecuted_blocks=1
00:09:19.682 
00:09:19.682 '
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:09:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.682 --rc genhtml_branch_coverage=1
00:09:19.682 --rc genhtml_function_coverage=1
00:09:19.682 --rc genhtml_legend=1
00:09:19.682 --rc geninfo_all_blocks=1
00:09:19.682 --rc geninfo_unexecuted_blocks=1
00:09:19.682 
00:09:19.682 '
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:09:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.682 --rc genhtml_branch_coverage=1
00:09:19.682 --rc genhtml_function_coverage=1
00:09:19.682 --rc genhtml_legend=1
00:09:19.682 --rc geninfo_all_blocks=1
00:09:19.682 --rc geninfo_unexecuted_blocks=1
00:09:19.682 
00:09:19.682 '
00:09:19.682 06:19:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:19.682 06:19:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=359176
00:09:19.682 06:19:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:09:19.682 06:19:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 359176
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 359176 ']'
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:19.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:19.682 06:19:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:19.941 [2024-11-20 06:19:51.517949] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:09:19.942 [2024-11-20 06:19:51.517999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359176 ]
00:09:19.942 [2024-11-20 06:19:51.593599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.942 [2024-11-20 06:19:51.632740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:20.201 06:19:51 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:20.201 06:19:51 alias_rpc -- common/autotest_common.sh@866 -- # return 0
00:09:20.201 06:19:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:09:20.460 06:19:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 359176
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 359176 ']'
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 359176
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@957 -- # uname
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 359176
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 359176'
00:09:20.460 killing process with pid 359176
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@971 -- # kill 359176
00:09:20.460 06:19:52 alias_rpc -- common/autotest_common.sh@976 -- # wait 359176
00:09:20.719 
00:09:20.719 real 0m1.134s
00:09:20.719 user 0m1.147s
00:09:20.719 sys 0m0.424s
00:09:20.719 06:19:52 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:20.719 06:19:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:20.719 ************************************
00:09:20.719 END TEST alias_rpc
00:09:20.719 ************************************
00:09:20.719 06:19:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:09:20.719 06:19:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:09:20.719 06:19:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:20.719 06:19:52 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:20.719 06:19:52 -- common/autotest_common.sh@10 -- # set +x
00:09:20.979 ************************************
00:09:20.979 START TEST spdkcli_tcp
00:09:20.979 ************************************
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:09:20.979 * Looking for test storage...
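The alias_rpc test above boils down to one call, rpc.py load_config -i against a bare target. Reading -i as --include-aliases (an assumption from rpc.py's option naming; the trace does not expand the short flag), the point of the test is that a config written with deprecated RPC method names is still accepted and resolved to the current names:

    # Replay a config that may use old (aliased) method names
    # (-i assumed to be --include-aliases; file name illustrative)
    scripts/rpc.py load_config -i < old_style_config.json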
00:09:20.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:20.979 06:19:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:09:20.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.979 --rc genhtml_branch_coverage=1
00:09:20.979 --rc genhtml_function_coverage=1
00:09:20.979 --rc genhtml_legend=1
00:09:20.979 --rc geninfo_all_blocks=1
00:09:20.979 --rc geninfo_unexecuted_blocks=1
00:09:20.979 
00:09:20.979 '
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:09:20.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.979 --rc genhtml_branch_coverage=1
00:09:20.979 --rc genhtml_function_coverage=1
00:09:20.979 --rc genhtml_legend=1
00:09:20.979 --rc geninfo_all_blocks=1
00:09:20.979 --rc geninfo_unexecuted_blocks=1
00:09:20.979 
00:09:20.979 '
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:09:20.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.979 --rc genhtml_branch_coverage=1
00:09:20.979 --rc genhtml_function_coverage=1
00:09:20.979 --rc genhtml_legend=1
00:09:20.979 --rc geninfo_all_blocks=1
00:09:20.979 --rc geninfo_unexecuted_blocks=1
00:09:20.979 
00:09:20.979 '
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:09:20.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.979 --rc genhtml_branch_coverage=1
00:09:20.979 --rc genhtml_function_coverage=1
00:09:20.979 --rc genhtml_legend=1
00:09:20.979 --rc geninfo_all_blocks=1
00:09:20.979 --rc geninfo_unexecuted_blocks=1
00:09:20.979 
00:09:20.979 '
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=359466
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 359466
00:09:20.979 06:19:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 359466 ']'
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:20.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:20.979 06:19:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:20.979 [2024-11-20 06:19:52.729943] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
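This target is started with -m 0x3 -p 0: a two-core reactor mask (hence the two "Reactor started" notices in the trace below) with core 0 as the main core. spdkcli_tcp is also the one test here that drives the RPC socket over TCP rather than directly: as traced just below, socat bridges 127.0.0.1:9998 to the UNIX socket and rpc.py connects with an address/port pair plus retry (-r) and timeout (-t) budgets. Reduced to its two moving parts:

    # Bridge the UNIX RPC socket onto TCP port 9998 (socat pid 359572 in this run)
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    # Drive the target over TCP instead of the UNIX socket
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods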
00:09:20.979 [2024-11-20 06:19:52.729995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359466 ]
00:09:20.979 [2024-11-20 06:19:52.804835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:21.238 [2024-11-20 06:19:52.848434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:21.238 [2024-11-20 06:19:52.848434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.805 06:19:53 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:21.805 06:19:53 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0
00:09:21.805 06:19:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=359572
00:09:21.805 06:19:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:09:21.805 06:19:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:09:22.065 [
00:09:22.065 "bdev_malloc_delete",
00:09:22.065 "bdev_malloc_create",
00:09:22.065 "bdev_null_resize",
00:09:22.065 "bdev_null_delete",
00:09:22.065 "bdev_null_create",
00:09:22.065 "bdev_nvme_cuse_unregister",
00:09:22.065 "bdev_nvme_cuse_register",
00:09:22.065 "bdev_opal_new_user",
00:09:22.065 "bdev_opal_set_lock_state",
00:09:22.065 "bdev_opal_delete",
00:09:22.065 "bdev_opal_get_info",
00:09:22.065 "bdev_opal_create",
00:09:22.065 "bdev_nvme_opal_revert",
00:09:22.065 "bdev_nvme_opal_init",
00:09:22.065 "bdev_nvme_send_cmd",
00:09:22.065 "bdev_nvme_set_keys",
00:09:22.065 "bdev_nvme_get_path_iostat",
00:09:22.065 "bdev_nvme_get_mdns_discovery_info",
00:09:22.065 "bdev_nvme_stop_mdns_discovery",
00:09:22.065 "bdev_nvme_start_mdns_discovery",
00:09:22.065 "bdev_nvme_set_multipath_policy",
00:09:22.065 "bdev_nvme_set_preferred_path",
00:09:22.065 "bdev_nvme_get_io_paths",
00:09:22.065 "bdev_nvme_remove_error_injection",
00:09:22.065 "bdev_nvme_add_error_injection",
00:09:22.065 "bdev_nvme_get_discovery_info",
00:09:22.065 "bdev_nvme_stop_discovery",
00:09:22.065 "bdev_nvme_start_discovery",
00:09:22.065 "bdev_nvme_get_controller_health_info",
00:09:22.065 "bdev_nvme_disable_controller",
00:09:22.065 "bdev_nvme_enable_controller",
00:09:22.065 "bdev_nvme_reset_controller",
00:09:22.065 "bdev_nvme_get_transport_statistics",
00:09:22.065 "bdev_nvme_apply_firmware",
00:09:22.065 "bdev_nvme_detach_controller",
00:09:22.065 "bdev_nvme_get_controllers",
00:09:22.065 "bdev_nvme_attach_controller",
00:09:22.065 "bdev_nvme_set_hotplug",
00:09:22.065 "bdev_nvme_set_options",
00:09:22.065 "bdev_passthru_delete",
00:09:22.065 "bdev_passthru_create",
00:09:22.065 "bdev_lvol_set_parent_bdev",
00:09:22.065 "bdev_lvol_set_parent",
00:09:22.065 "bdev_lvol_check_shallow_copy",
00:09:22.065 "bdev_lvol_start_shallow_copy",
00:09:22.065 "bdev_lvol_grow_lvstore",
00:09:22.065 "bdev_lvol_get_lvols",
00:09:22.065 "bdev_lvol_get_lvstores",
00:09:22.065 "bdev_lvol_delete",
00:09:22.065 "bdev_lvol_set_read_only",
00:09:22.065 "bdev_lvol_resize",
00:09:22.065 "bdev_lvol_decouple_parent",
00:09:22.065 "bdev_lvol_inflate",
00:09:22.065 "bdev_lvol_rename",
00:09:22.065 "bdev_lvol_clone_bdev",
00:09:22.065 "bdev_lvol_clone",
00:09:22.065 "bdev_lvol_snapshot",
00:09:22.065 "bdev_lvol_create",
00:09:22.065 "bdev_lvol_delete_lvstore",
00:09:22.065 "bdev_lvol_rename_lvstore",
00:09:22.065 "bdev_lvol_create_lvstore", 00:09:22.065 "bdev_raid_set_options", 00:09:22.065 "bdev_raid_remove_base_bdev", 00:09:22.065 "bdev_raid_add_base_bdev", 00:09:22.065 "bdev_raid_delete", 00:09:22.065 "bdev_raid_create", 00:09:22.065 "bdev_raid_get_bdevs", 00:09:22.065 "bdev_error_inject_error", 00:09:22.065 "bdev_error_delete", 00:09:22.065 "bdev_error_create", 00:09:22.065 "bdev_split_delete", 00:09:22.065 "bdev_split_create", 00:09:22.065 "bdev_delay_delete", 00:09:22.065 "bdev_delay_create", 00:09:22.065 "bdev_delay_update_latency", 00:09:22.065 "bdev_zone_block_delete", 00:09:22.065 "bdev_zone_block_create", 00:09:22.065 "blobfs_create", 00:09:22.065 "blobfs_detect", 00:09:22.065 "blobfs_set_cache_size", 00:09:22.065 "bdev_aio_delete", 00:09:22.065 "bdev_aio_rescan", 00:09:22.065 "bdev_aio_create", 00:09:22.065 "bdev_ftl_set_property", 00:09:22.065 "bdev_ftl_get_properties", 00:09:22.065 "bdev_ftl_get_stats", 00:09:22.065 "bdev_ftl_unmap", 00:09:22.065 "bdev_ftl_unload", 00:09:22.065 "bdev_ftl_delete", 00:09:22.066 "bdev_ftl_load", 00:09:22.066 "bdev_ftl_create", 00:09:22.066 "bdev_virtio_attach_controller", 00:09:22.066 "bdev_virtio_scsi_get_devices", 00:09:22.066 "bdev_virtio_detach_controller", 00:09:22.066 "bdev_virtio_blk_set_hotplug", 00:09:22.066 "bdev_iscsi_delete", 00:09:22.066 "bdev_iscsi_create", 00:09:22.066 "bdev_iscsi_set_options", 00:09:22.066 "accel_error_inject_error", 00:09:22.066 "ioat_scan_accel_module", 00:09:22.066 "dsa_scan_accel_module", 00:09:22.066 "iaa_scan_accel_module", 00:09:22.066 "vfu_virtio_create_fs_endpoint", 00:09:22.066 "vfu_virtio_create_scsi_endpoint", 00:09:22.066 "vfu_virtio_scsi_remove_target", 00:09:22.066 "vfu_virtio_scsi_add_target", 00:09:22.066 "vfu_virtio_create_blk_endpoint", 00:09:22.066 "vfu_virtio_delete_endpoint", 00:09:22.066 "keyring_file_remove_key", 00:09:22.066 "keyring_file_add_key", 00:09:22.066 "keyring_linux_set_options", 00:09:22.066 "fsdev_aio_delete", 00:09:22.066 "fsdev_aio_create", 00:09:22.066 "iscsi_get_histogram", 00:09:22.066 "iscsi_enable_histogram", 00:09:22.066 "iscsi_set_options", 00:09:22.066 "iscsi_get_auth_groups", 00:09:22.066 "iscsi_auth_group_remove_secret", 00:09:22.066 "iscsi_auth_group_add_secret", 00:09:22.066 "iscsi_delete_auth_group", 00:09:22.066 "iscsi_create_auth_group", 00:09:22.066 "iscsi_set_discovery_auth", 00:09:22.066 "iscsi_get_options", 00:09:22.066 "iscsi_target_node_request_logout", 00:09:22.066 "iscsi_target_node_set_redirect", 00:09:22.066 "iscsi_target_node_set_auth", 00:09:22.066 "iscsi_target_node_add_lun", 00:09:22.066 "iscsi_get_stats", 00:09:22.066 "iscsi_get_connections", 00:09:22.066 "iscsi_portal_group_set_auth", 00:09:22.066 "iscsi_start_portal_group", 00:09:22.066 "iscsi_delete_portal_group", 00:09:22.066 "iscsi_create_portal_group", 00:09:22.066 "iscsi_get_portal_groups", 00:09:22.066 "iscsi_delete_target_node", 00:09:22.066 "iscsi_target_node_remove_pg_ig_maps", 00:09:22.066 "iscsi_target_node_add_pg_ig_maps", 00:09:22.066 "iscsi_create_target_node", 00:09:22.066 "iscsi_get_target_nodes", 00:09:22.066 "iscsi_delete_initiator_group", 00:09:22.066 "iscsi_initiator_group_remove_initiators", 00:09:22.066 "iscsi_initiator_group_add_initiators", 00:09:22.066 "iscsi_create_initiator_group", 00:09:22.066 "iscsi_get_initiator_groups", 00:09:22.066 "nvmf_set_crdt", 00:09:22.066 "nvmf_set_config", 00:09:22.066 "nvmf_set_max_subsystems", 00:09:22.066 "nvmf_stop_mdns_prr", 00:09:22.066 "nvmf_publish_mdns_prr", 00:09:22.066 "nvmf_subsystem_get_listeners", 00:09:22.066 
"nvmf_subsystem_get_qpairs", 00:09:22.066 "nvmf_subsystem_get_controllers", 00:09:22.066 "nvmf_get_stats", 00:09:22.066 "nvmf_get_transports", 00:09:22.066 "nvmf_create_transport", 00:09:22.066 "nvmf_get_targets", 00:09:22.066 "nvmf_delete_target", 00:09:22.066 "nvmf_create_target", 00:09:22.066 "nvmf_subsystem_allow_any_host", 00:09:22.066 "nvmf_subsystem_set_keys", 00:09:22.066 "nvmf_subsystem_remove_host", 00:09:22.066 "nvmf_subsystem_add_host", 00:09:22.066 "nvmf_ns_remove_host", 00:09:22.066 "nvmf_ns_add_host", 00:09:22.066 "nvmf_subsystem_remove_ns", 00:09:22.066 "nvmf_subsystem_set_ns_ana_group", 00:09:22.066 "nvmf_subsystem_add_ns", 00:09:22.066 "nvmf_subsystem_listener_set_ana_state", 00:09:22.066 "nvmf_discovery_get_referrals", 00:09:22.066 "nvmf_discovery_remove_referral", 00:09:22.066 "nvmf_discovery_add_referral", 00:09:22.066 "nvmf_subsystem_remove_listener", 00:09:22.066 "nvmf_subsystem_add_listener", 00:09:22.066 "nvmf_delete_subsystem", 00:09:22.066 "nvmf_create_subsystem", 00:09:22.066 "nvmf_get_subsystems", 00:09:22.066 "env_dpdk_get_mem_stats", 00:09:22.066 "nbd_get_disks", 00:09:22.066 "nbd_stop_disk", 00:09:22.066 "nbd_start_disk", 00:09:22.066 "ublk_recover_disk", 00:09:22.066 "ublk_get_disks", 00:09:22.066 "ublk_stop_disk", 00:09:22.066 "ublk_start_disk", 00:09:22.066 "ublk_destroy_target", 00:09:22.066 "ublk_create_target", 00:09:22.066 "virtio_blk_create_transport", 00:09:22.066 "virtio_blk_get_transports", 00:09:22.066 "vhost_controller_set_coalescing", 00:09:22.066 "vhost_get_controllers", 00:09:22.066 "vhost_delete_controller", 00:09:22.066 "vhost_create_blk_controller", 00:09:22.066 "vhost_scsi_controller_remove_target", 00:09:22.066 "vhost_scsi_controller_add_target", 00:09:22.066 "vhost_start_scsi_controller", 00:09:22.066 "vhost_create_scsi_controller", 00:09:22.066 "thread_set_cpumask", 00:09:22.066 "scheduler_set_options", 00:09:22.066 "framework_get_governor", 00:09:22.066 "framework_get_scheduler", 00:09:22.066 "framework_set_scheduler", 00:09:22.066 "framework_get_reactors", 00:09:22.066 "thread_get_io_channels", 00:09:22.066 "thread_get_pollers", 00:09:22.066 "thread_get_stats", 00:09:22.066 "framework_monitor_context_switch", 00:09:22.066 "spdk_kill_instance", 00:09:22.066 "log_enable_timestamps", 00:09:22.066 "log_get_flags", 00:09:22.066 "log_clear_flag", 00:09:22.066 "log_set_flag", 00:09:22.066 "log_get_level", 00:09:22.066 "log_set_level", 00:09:22.066 "log_get_print_level", 00:09:22.066 "log_set_print_level", 00:09:22.066 "framework_enable_cpumask_locks", 00:09:22.066 "framework_disable_cpumask_locks", 00:09:22.066 "framework_wait_init", 00:09:22.066 "framework_start_init", 00:09:22.066 "scsi_get_devices", 00:09:22.066 "bdev_get_histogram", 00:09:22.066 "bdev_enable_histogram", 00:09:22.066 "bdev_set_qos_limit", 00:09:22.066 "bdev_set_qd_sampling_period", 00:09:22.066 "bdev_get_bdevs", 00:09:22.066 "bdev_reset_iostat", 00:09:22.066 "bdev_get_iostat", 00:09:22.066 "bdev_examine", 00:09:22.066 "bdev_wait_for_examine", 00:09:22.066 "bdev_set_options", 00:09:22.066 "accel_get_stats", 00:09:22.066 "accel_set_options", 00:09:22.066 "accel_set_driver", 00:09:22.066 "accel_crypto_key_destroy", 00:09:22.066 "accel_crypto_keys_get", 00:09:22.066 "accel_crypto_key_create", 00:09:22.066 "accel_assign_opc", 00:09:22.066 "accel_get_module_info", 00:09:22.066 "accel_get_opc_assignments", 00:09:22.066 "vmd_rescan", 00:09:22.066 "vmd_remove_device", 00:09:22.066 "vmd_enable", 00:09:22.066 "sock_get_default_impl", 00:09:22.066 "sock_set_default_impl", 
00:09:22.066 "sock_impl_set_options", 00:09:22.066 "sock_impl_get_options", 00:09:22.066 "iobuf_get_stats", 00:09:22.066 "iobuf_set_options", 00:09:22.066 "keyring_get_keys", 00:09:22.066 "vfu_tgt_set_base_path", 00:09:22.066 "framework_get_pci_devices", 00:09:22.066 "framework_get_config", 00:09:22.066 "framework_get_subsystems", 00:09:22.066 "fsdev_set_opts", 00:09:22.066 "fsdev_get_opts", 00:09:22.066 "trace_get_info", 00:09:22.066 "trace_get_tpoint_group_mask", 00:09:22.066 "trace_disable_tpoint_group", 00:09:22.066 "trace_enable_tpoint_group", 00:09:22.066 "trace_clear_tpoint_mask", 00:09:22.066 "trace_set_tpoint_mask", 00:09:22.066 "notify_get_notifications", 00:09:22.066 "notify_get_types", 00:09:22.066 "spdk_get_version", 00:09:22.066 "rpc_get_methods" 00:09:22.066 ] 00:09:22.066 06:19:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.066 06:19:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:22.066 06:19:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 359466 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 359466 ']' 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 359466 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 359466 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 359466' 00:09:22.066 killing process with pid 359466 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 359466 00:09:22.066 06:19:53 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 359466 00:09:22.634 00:09:22.635 real 0m1.673s 00:09:22.635 user 0m3.141s 00:09:22.635 sys 0m0.473s 00:09:22.635 06:19:54 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:22.635 06:19:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.635 ************************************ 00:09:22.635 END TEST spdkcli_tcp 00:09:22.635 ************************************ 00:09:22.635 06:19:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:22.635 06:19:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:22.635 06:19:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:22.635 06:19:54 -- common/autotest_common.sh@10 -- # set +x 00:09:22.635 ************************************ 00:09:22.635 START TEST dpdk_mem_utility 00:09:22.635 ************************************ 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:22.635 * Looking for test storage... 
00:09:22.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.635 06:19:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:22.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.635 --rc genhtml_branch_coverage=1 00:09:22.635 --rc genhtml_function_coverage=1 00:09:22.635 --rc genhtml_legend=1 00:09:22.635 --rc geninfo_all_blocks=1 00:09:22.635 --rc geninfo_unexecuted_blocks=1 00:09:22.635 00:09:22.635 ' 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:22.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.635 --rc 
genhtml_branch_coverage=1 00:09:22.635 --rc genhtml_function_coverage=1 00:09:22.635 --rc genhtml_legend=1 00:09:22.635 --rc geninfo_all_blocks=1 00:09:22.635 --rc geninfo_unexecuted_blocks=1 00:09:22.635 00:09:22.635 ' 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:22.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.635 --rc genhtml_branch_coverage=1 00:09:22.635 --rc genhtml_function_coverage=1 00:09:22.635 --rc genhtml_legend=1 00:09:22.635 --rc geninfo_all_blocks=1 00:09:22.635 --rc geninfo_unexecuted_blocks=1 00:09:22.635 00:09:22.635 ' 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:22.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.635 --rc genhtml_branch_coverage=1 00:09:22.635 --rc genhtml_function_coverage=1 00:09:22.635 --rc genhtml_legend=1 00:09:22.635 --rc geninfo_all_blocks=1 00:09:22.635 --rc geninfo_unexecuted_blocks=1 00:09:22.635 00:09:22.635 ' 00:09:22.635 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:22.635 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=359782 00:09:22.635 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 359782 00:09:22.635 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 359782 ']' 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:22.635 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:22.895 [2024-11-20 06:19:54.470684] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
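The dpdk_mem_utility test starting here drives three pieces that also work by hand: spdk_tgt, the env_dpdk_get_mem_stats RPC, and scripts/dpdk_mem_info.py. A rough sketch of the same flow from the SPDK source root, assuming a built tree (the test itself waits for the RPC socket with waitforlisten before issuing calls):

  ./build/bin/spdk_tgt &                    # startup banner and EAL notices like the ones below
  ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes its stats; prints {"filename": "/tmp/spdk_mem_dump.txt"}
  ./scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as dumped below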
00:09:22.895 [2024-11-20 06:19:54.470735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359782 ] 00:09:22.895 [2024-11-20 06:19:54.546174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.895 [2024-11-20 06:19:54.585796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.156 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:23.156 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:09:23.156 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:23.156 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:23.156 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.156 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:23.156 { 00:09:23.156 "filename": "/tmp/spdk_mem_dump.txt" 00:09:23.156 } 00:09:23.156 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.156 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:23.156 DPDK memory size 810.000000 MiB in 1 heap(s) 00:09:23.156 1 heaps totaling size 810.000000 MiB 00:09:23.156 size: 810.000000 MiB heap id: 0 00:09:23.156 end heaps---------- 00:09:23.156 9 mempools totaling size 595.772034 MiB 00:09:23.156 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:23.156 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:23.156 size: 92.545471 MiB name: bdev_io_359782 00:09:23.156 size: 50.003479 MiB name: msgpool_359782 00:09:23.156 size: 36.509338 MiB name: fsdev_io_359782 00:09:23.156 size: 21.763794 MiB name: PDU_Pool 00:09:23.156 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:23.156 size: 4.133484 MiB name: evtpool_359782 00:09:23.156 size: 0.026123 MiB name: Session_Pool 00:09:23.156 end mempools------- 00:09:23.156 6 memzones totaling size 4.142822 MiB 00:09:23.156 size: 1.000366 MiB name: RG_ring_0_359782 00:09:23.156 size: 1.000366 MiB name: RG_ring_1_359782 00:09:23.156 size: 1.000366 MiB name: RG_ring_4_359782 00:09:23.156 size: 1.000366 MiB name: RG_ring_5_359782 00:09:23.156 size: 0.125366 MiB name: RG_ring_2_359782 00:09:23.156 size: 0.015991 MiB name: RG_ring_3_359782 00:09:23.156 end memzones------- 00:09:23.156 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:23.156 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:09:23.156 list of free elements. 
size: 10.862488 MiB 00:09:23.156 element at address: 0x200018a00000 with size: 0.999878 MiB 00:09:23.156 element at address: 0x200018c00000 with size: 0.999878 MiB 00:09:23.156 element at address: 0x200000400000 with size: 0.998535 MiB 00:09:23.156 element at address: 0x200031800000 with size: 0.994446 MiB 00:09:23.156 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:23.156 element at address: 0x200012c00000 with size: 0.954285 MiB 00:09:23.156 element at address: 0x200018e00000 with size: 0.936584 MiB 00:09:23.156 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:23.156 element at address: 0x20001a600000 with size: 0.582886 MiB 00:09:23.156 element at address: 0x200000c00000 with size: 0.495422 MiB 00:09:23.156 element at address: 0x20000a600000 with size: 0.490723 MiB 00:09:23.156 element at address: 0x200019000000 with size: 0.485657 MiB 00:09:23.156 element at address: 0x200003e00000 with size: 0.481934 MiB 00:09:23.156 element at address: 0x200027a00000 with size: 0.410034 MiB 00:09:23.156 element at address: 0x200000800000 with size: 0.355042 MiB 00:09:23.156 list of standard malloc elements. size: 199.218628 MiB 00:09:23.156 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:23.156 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:23.156 element at address: 0x200018afff80 with size: 1.000122 MiB 00:09:23.156 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:09:23.156 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:23.156 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:23.156 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:09:23.156 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:23.156 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:09:23.156 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20000085b040 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20000085f300 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:09:23.156 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:09:23.156 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20001a695380 with size: 0.000183 MiB 00:09:23.156 element at address: 0x20001a695440 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200027a69040 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:09:23.156 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:09:23.156 list of memzone associated elements. size: 599.918884 MiB 00:09:23.156 element at address: 0x20001a695500 with size: 211.416748 MiB 00:09:23.156 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:23.156 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:09:23.156 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:23.156 element at address: 0x200012df4780 with size: 92.045044 MiB 00:09:23.156 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_359782_0 00:09:23.156 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:23.156 associated memzone info: size: 48.002930 MiB name: MP_msgpool_359782_0 00:09:23.156 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:23.156 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_359782_0 00:09:23.156 element at address: 0x2000191be940 with size: 20.255554 MiB 00:09:23.156 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:23.156 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:09:23.156 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:23.156 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:23.156 associated memzone info: size: 3.000122 MiB name: MP_evtpool_359782_0 00:09:23.156 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:23.156 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_359782 00:09:23.156 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:23.156 associated memzone info: size: 1.007996 MiB name: MP_evtpool_359782 00:09:23.156 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:23.156 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:23.156 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:09:23.156 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:23.156 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:23.156 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:23.156 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:23.156 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:23.156 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:23.156 associated memzone info: size: 1.000366 MiB name: RG_ring_0_359782 00:09:23.156 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:23.156 associated memzone info: size: 1.000366 MiB name: RG_ring_1_359782 00:09:23.157 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:09:23.157 associated memzone info: size: 1.000366 MiB name: RG_ring_4_359782 00:09:23.157 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:09:23.157 associated memzone info: size: 1.000366 MiB name: RG_ring_5_359782 00:09:23.157 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:23.157 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_359782 00:09:23.157 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:23.157 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_359782 00:09:23.157 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:23.157 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:23.157 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:23.157 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:23.157 element at address: 0x20001907c540 with size: 0.250488 MiB 00:09:23.157 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:23.157 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:23.157 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_359782 00:09:23.157 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:09:23.157 associated memzone info: size: 0.125366 MiB name: RG_ring_2_359782 00:09:23.157 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:09:23.157 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:23.157 element at address: 0x200027a69100 with size: 0.023743 MiB 00:09:23.157 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:23.157 element at address: 0x20000085b100 with size: 0.016113 MiB 00:09:23.157 associated memzone info: size: 0.015991 MiB name: RG_ring_3_359782 00:09:23.157 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:09:23.157 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:23.157 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:09:23.157 associated memzone info: size: 0.000183 MiB name: MP_msgpool_359782 00:09:23.157 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:23.157 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_359782 00:09:23.157 element at address: 0x20000085af00 with size: 0.000305 MiB 00:09:23.157 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_359782 00:09:23.157 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:09:23.157 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:23.157 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:23.157 06:19:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 359782 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 359782 ']' 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 359782 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 359782 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 359782' 00:09:23.157 killing process with pid 359782 00:09:23.157 06:19:54 dpdk_mem_utility -- 
common/autotest_common.sh@971 -- # kill 359782 00:09:23.157 06:19:54 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 359782 00:09:23.725 00:09:23.725 real 0m1.009s 00:09:23.725 user 0m0.958s 00:09:23.725 sys 0m0.391s 00:09:23.725 06:19:55 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:23.725 06:19:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:23.725 ************************************ 00:09:23.725 END TEST dpdk_mem_utility 00:09:23.725 ************************************ 00:09:23.725 06:19:55 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:23.725 06:19:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:23.725 06:19:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:23.725 06:19:55 -- common/autotest_common.sh@10 -- # set +x 00:09:23.725 ************************************ 00:09:23.725 START TEST event 00:09:23.725 ************************************ 00:09:23.725 06:19:55 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:23.725 * Looking for test storage... 00:09:23.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:23.725 06:19:55 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:23.725 06:19:55 event -- common/autotest_common.sh@1691 -- # lcov --version 00:09:23.725 06:19:55 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:23.725 06:19:55 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:23.725 06:19:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.725 06:19:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.725 06:19:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.726 06:19:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.726 06:19:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.726 06:19:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.726 06:19:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.726 06:19:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.726 06:19:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.726 06:19:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.726 06:19:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.726 06:19:55 event -- scripts/common.sh@344 -- # case "$op" in 00:09:23.726 06:19:55 event -- scripts/common.sh@345 -- # : 1 00:09:23.726 06:19:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.726 06:19:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.726 06:19:55 event -- scripts/common.sh@365 -- # decimal 1 00:09:23.726 06:19:55 event -- scripts/common.sh@353 -- # local d=1 00:09:23.726 06:19:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.726 06:19:55 event -- scripts/common.sh@355 -- # echo 1 00:09:23.726 06:19:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.726 06:19:55 event -- scripts/common.sh@366 -- # decimal 2 00:09:23.726 06:19:55 event -- scripts/common.sh@353 -- # local d=2 00:09:23.726 06:19:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.726 06:19:55 event -- scripts/common.sh@355 -- # echo 2 00:09:23.726 06:19:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.726 06:19:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.726 06:19:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.726 06:19:55 event -- scripts/common.sh@368 -- # return 0 00:09:23.726 06:19:55 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.726 06:19:55 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:23.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.726 --rc genhtml_branch_coverage=1 00:09:23.726 --rc genhtml_function_coverage=1 00:09:23.726 --rc genhtml_legend=1 00:09:23.726 --rc geninfo_all_blocks=1 00:09:23.726 --rc geninfo_unexecuted_blocks=1 00:09:23.726 00:09:23.726 ' 00:09:23.726 06:19:55 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:23.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.726 --rc genhtml_branch_coverage=1 00:09:23.726 --rc genhtml_function_coverage=1 00:09:23.726 --rc genhtml_legend=1 00:09:23.726 --rc geninfo_all_blocks=1 00:09:23.726 --rc geninfo_unexecuted_blocks=1 00:09:23.726 00:09:23.726 ' 00:09:23.726 06:19:55 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:23.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.726 --rc genhtml_branch_coverage=1 00:09:23.726 --rc genhtml_function_coverage=1 00:09:23.726 --rc genhtml_legend=1 00:09:23.726 --rc geninfo_all_blocks=1 00:09:23.726 --rc geninfo_unexecuted_blocks=1 00:09:23.726 00:09:23.726 ' 00:09:23.726 06:19:55 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:23.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.726 --rc genhtml_branch_coverage=1 00:09:23.726 --rc genhtml_function_coverage=1 00:09:23.726 --rc genhtml_legend=1 00:09:23.726 --rc geninfo_all_blocks=1 00:09:23.726 --rc geninfo_unexecuted_blocks=1 00:09:23.726 00:09:23.726 ' 00:09:23.726 06:19:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:23.726 06:19:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:23.726 06:19:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:23.726 06:19:55 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:09:23.726 06:19:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:23.726 06:19:55 event -- common/autotest_common.sh@10 -- # set +x 00:09:23.726 ************************************ 00:09:23.726 START TEST event_perf 00:09:23.726 ************************************ 00:09:23.726 06:19:55 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:09:23.726 Running I/O for 1 seconds...[2024-11-20 06:19:55.543680] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:23.726 [2024-11-20 06:19:55.543738] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360068 ] 00:09:23.985 [2024-11-20 06:19:55.620310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.985 [2024-11-20 06:19:55.663259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.985 [2024-11-20 06:19:55.663300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.985 [2024-11-20 06:19:55.663406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.985 [2024-11-20 06:19:55.663406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.921 Running I/O for 1 seconds... 00:09:24.921 lcore 0: 209269 00:09:24.921 lcore 1: 209267 00:09:24.921 lcore 2: 209269 00:09:24.921 lcore 3: 209268 00:09:24.921 done. 00:09:24.921 00:09:24.921 real 0m1.181s 00:09:24.921 user 0m4.102s 00:09:24.921 sys 0m0.076s 00:09:24.921 06:19:56 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.921 06:19:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:24.921 ************************************ 00:09:24.921 END TEST event_perf 00:09:24.921 ************************************ 00:09:24.921 06:19:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:24.921 06:19:56 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:24.921 06:19:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.921 06:19:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:25.180 ************************************ 00:09:25.180 START TEST event_reactor 00:09:25.180 ************************************ 00:09:25.180 06:19:56 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:25.180 [2024-11-20 06:19:56.793929] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:25.180 [2024-11-20 06:19:56.793997] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360324 ] 00:09:25.180 [2024-11-20 06:19:56.870925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.180 [2024-11-20 06:19:56.910675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.117 test_start 00:09:26.117 oneshot 00:09:26.117 tick 100 00:09:26.117 tick 100 00:09:26.117 tick 250 00:09:26.117 tick 100 00:09:26.117 tick 100 00:09:26.117 tick 100 00:09:26.117 tick 250 00:09:26.117 tick 500 00:09:26.117 tick 100 00:09:26.117 tick 100 00:09:26.117 tick 250 00:09:26.117 tick 100 00:09:26.117 tick 100 00:09:26.117 test_end 00:09:26.117 00:09:26.117 real 0m1.175s 00:09:26.117 user 0m1.097s 00:09:26.117 sys 0m0.075s 00:09:26.117 06:19:57 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.117 06:19:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:26.117 ************************************ 00:09:26.117 END TEST event_reactor 00:09:26.117 ************************************ 00:09:26.376 06:19:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:26.376 06:19:57 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:26.376 06:19:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.376 06:19:57 event -- common/autotest_common.sh@10 -- # set +x 00:09:26.376 ************************************ 00:09:26.376 START TEST event_reactor_perf 00:09:26.376 ************************************ 00:09:26.376 06:19:58 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:26.376 [2024-11-20 06:19:58.038287] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:26.376 [2024-11-20 06:19:58.038356] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360574 ] 00:09:26.376 [2024-11-20 06:19:58.115698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.376 [2024-11-20 06:19:58.155012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.754 test_start 00:09:27.754 test_end 00:09:27.754 Performance: 507381 events per second 00:09:27.754 00:09:27.754 real 0m1.178s 00:09:27.754 user 0m1.099s 00:09:27.754 sys 0m0.075s 00:09:27.754 06:19:59 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.754 06:19:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:27.754 ************************************ 00:09:27.754 END TEST event_reactor_perf 00:09:27.754 ************************************ 00:09:27.754 06:19:59 event -- event/event.sh@49 -- # uname -s 00:09:27.754 06:19:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:27.754 06:19:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:27.754 06:19:59 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:27.754 06:19:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.754 06:19:59 event -- common/autotest_common.sh@10 -- # set +x 00:09:27.754 ************************************ 00:09:27.754 START TEST event_scheduler 00:09:27.754 ************************************ 00:09:27.754 06:19:59 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:27.754 * Looking for test storage... 
00:09:27.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:27.754 06:19:59 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:27.754 06:19:59 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:09:27.754 06:19:59 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:27.754 06:19:59 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:27.754 06:19:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.755 06:19:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.755 06:19:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.755 06:19:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.755 --rc genhtml_branch_coverage=1 00:09:27.755 --rc genhtml_function_coverage=1 00:09:27.755 --rc genhtml_legend=1 00:09:27.755 --rc geninfo_all_blocks=1 00:09:27.755 --rc geninfo_unexecuted_blocks=1 00:09:27.755 00:09:27.755 ' 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.755 --rc genhtml_branch_coverage=1 00:09:27.755 --rc genhtml_function_coverage=1 00:09:27.755 --rc genhtml_legend=1 00:09:27.755 --rc geninfo_all_blocks=1 00:09:27.755 --rc geninfo_unexecuted_blocks=1 00:09:27.755 00:09:27.755 ' 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.755 --rc genhtml_branch_coverage=1 00:09:27.755 --rc genhtml_function_coverage=1 00:09:27.755 --rc genhtml_legend=1 00:09:27.755 --rc geninfo_all_blocks=1 00:09:27.755 --rc geninfo_unexecuted_blocks=1 00:09:27.755 00:09:27.755 ' 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.755 --rc genhtml_branch_coverage=1 00:09:27.755 --rc genhtml_function_coverage=1 00:09:27.755 --rc genhtml_legend=1 00:09:27.755 --rc geninfo_all_blocks=1 00:09:27.755 --rc geninfo_unexecuted_blocks=1 00:09:27.755 00:09:27.755 ' 00:09:27.755 06:19:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:27.755 06:19:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=360862 00:09:27.755 06:19:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:27.755 06:19:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:27.755 06:19:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 360862 
00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 360862 ']' 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:27.755 06:19:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:27.755 [2024-11-20 06:19:59.493828] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:27.755 [2024-11-20 06:19:59.493876] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360862 ] 00:09:27.755 [2024-11-20 06:19:59.566765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.014 [2024-11-20 06:19:59.610379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.014 [2024-11-20 06:19:59.610491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.014 [2024-11-20 06:19:59.610581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.014 [2024-11-20 06:19:59.610582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.014 06:19:59 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:28.014 06:19:59 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:09:28.014 06:19:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:28.014 06:19:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.014 06:19:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:28.014 [2024-11-20 06:19:59.659186] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:28.014 [2024-11-20 06:19:59.659209] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:28.014 [2024-11-20 06:19:59.659219] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:28.014 [2024-11-20 06:19:59.659225] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:28.014 [2024-11-20 06:19:59.659230] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:28.014 06:19:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.014 06:19:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:28.014 06:19:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.014 06:19:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:28.014 [2024-11-20 06:19:59.736431] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
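The startup sequence just traced is the standard way to put an SPDK app onto the dynamic scheduler: launch with --wait-for-rpc, select the scheduler before subsystem initialization, then complete init. By hand, against an app started with --wait-for-rpc on the default socket, that is roughly (the dpdk_governor error above is non-fatal here; it means the 0xF core mask covers only part of an SMT sibling set on this host):

  ./scripts/rpc.py framework_set_scheduler dynamic   # must be called before framework_start_init
  ./scripts/rpc.py framework_start_init              # finish startup under the chosen scheduler
  ./scripts/rpc.py framework_get_scheduler           # confirm; both methods appear in rpc_get_methods above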
00:09:28.014 06:19:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:28.015 06:19:59 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:28.015 06:19:59 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 ************************************ 00:09:28.015 START TEST scheduler_create_thread 00:09:28.015 ************************************ 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 2 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 3 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 4 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 5 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 6 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 7 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 8 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.015 9 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.015 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.274 10 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.274 06:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.652 06:20:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.652 06:20:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:29.652 06:20:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:29.652 06:20:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.652 06:20:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:30.665 06:20:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.665 00:09:30.665 real 0m2.620s 00:09:30.665 user 0m0.026s 00:09:30.665 sys 0m0.004s 00:09:30.665 06:20:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.665 06:20:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:30.665 ************************************ 00:09:30.665 END TEST scheduler_create_thread 00:09:30.665 ************************************ 00:09:30.665 06:20:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:30.665 06:20:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 360862 00:09:30.665 06:20:02 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 360862 ']' 00:09:30.665 06:20:02 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 360862 00:09:30.665 06:20:02 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:09:30.665 06:20:02 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:30.665 06:20:02 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 360862 00:09:30.956 06:20:02 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:09:30.956 06:20:02 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:09:30.956 06:20:02 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 360862' 00:09:30.956 killing process with pid 360862 00:09:30.956 06:20:02 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 360862 00:09:30.956 06:20:02 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 360862 00:09:31.215 [2024-11-20 06:20:02.874732] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
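The scheduler_create_thread test that just finished exercises the test app's RPC plugin: threads are created pinned to individual cores with a given active load, one thread's load is changed at runtime, and one is deleted, all through rpc.py's --plugin mechanism. A sketch of the same calls, assuming the scheduler test app from test/event/scheduler is running and its scheduler_plugin module is importable by rpc.py (e.g. via PYTHONPATH):

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # returns a thread id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # thread id 11, 50% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # delete thread id 12

The -m mask pins the thread's cpumask and -a sets its simulated active percentage, matching the create calls traced above.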
00:09:31.215 00:09:31.215 real 0m3.773s 00:09:31.215 user 0m5.636s 00:09:31.215 sys 0m0.387s 00:09:31.215 06:20:03 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.215 06:20:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:31.215 ************************************ 00:09:31.215 END TEST event_scheduler 00:09:31.215 ************************************ 00:09:31.474 06:20:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:31.474 06:20:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:31.474 06:20:03 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:31.474 06:20:03 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.474 06:20:03 event -- common/autotest_common.sh@10 -- # set +x 00:09:31.474 ************************************ 00:09:31.474 START TEST app_repeat 00:09:31.474 ************************************ 00:09:31.474 06:20:03 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=361425 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:31.474 06:20:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:31.475 06:20:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 361425' 00:09:31.475 Process app_repeat pid: 361425 00:09:31.475 06:20:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:31.475 06:20:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:31.475 spdk_app_start Round 0 00:09:31.475 06:20:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 361425 /var/tmp/spdk-nbd.sock 00:09:31.475 06:20:03 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 361425 ']' 00:09:31.475 06:20:03 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:31.475 06:20:03 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.475 06:20:03 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:31.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:31.475 06:20:03 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.475 06:20:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:31.475 [2024-11-20 06:20:03.156524] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
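Every app_repeat round below follows the same shape: launch the app on the NBD RPC socket, block until that socket answers, exercise two malloc bdevs over NBD, then spdk_kill_instance and sleep before the next round. A sketch of the launch-and-wait step — approximating waitforlisten's readiness check by polling rpc_get_methods is an assumption of the sketch:

    test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    # Poll until the UNIX-domain RPC socket accepts requests.
    while ! scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods &> /dev/null; do
        kill -0 "$repeat_pid" || { echo "app_repeat died during startup"; exit 1; }
        sleep 0.1
    done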
00:09:31.475 [2024-11-20 06:20:03.156576] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361425 ] 00:09:31.475 [2024-11-20 06:20:03.232491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.475 [2024-11-20 06:20:03.276655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.475 [2024-11-20 06:20:03.276658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.734 06:20:03 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:31.734 06:20:03 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:31.734 06:20:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:31.734 Malloc0 00:09:31.734 06:20:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:31.993 Malloc1 00:09:31.993 06:20:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.993 06:20:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:32.252 /dev/nbd0 00:09:32.252 06:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:32.252 06:20:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:32.252 1+0 records in 00:09:32.252 1+0 records out 00:09:32.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228738 s, 17.9 MB/s 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:32.252 06:20:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:32.252 06:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.252 06:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.252 06:20:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:32.511 /dev/nbd1 00:09:32.511 06:20:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:32.511 06:20:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:32.511 1+0 records in 00:09:32.511 1+0 records out 00:09:32.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231692 s, 17.7 MB/s 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:32.511 06:20:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:32.511 06:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.511 06:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.511 
06:20:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:32.511 06:20:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.511 06:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:32.771 { 00:09:32.771 "nbd_device": "/dev/nbd0", 00:09:32.771 "bdev_name": "Malloc0" 00:09:32.771 }, 00:09:32.771 { 00:09:32.771 "nbd_device": "/dev/nbd1", 00:09:32.771 "bdev_name": "Malloc1" 00:09:32.771 } 00:09:32.771 ]' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:32.771 { 00:09:32.771 "nbd_device": "/dev/nbd0", 00:09:32.771 "bdev_name": "Malloc0" 00:09:32.771 }, 00:09:32.771 { 00:09:32.771 "nbd_device": "/dev/nbd1", 00:09:32.771 "bdev_name": "Malloc1" 00:09:32.771 } 00:09:32.771 ]' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:32.771 /dev/nbd1' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:32.771 /dev/nbd1' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:32.771 256+0 records in 00:09:32.771 256+0 records out 00:09:32.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101446 s, 103 MB/s 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:32.771 256+0 records in 00:09:32.771 256+0 records out 00:09:32.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136011 s, 77.1 MB/s 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:32.771 256+0 records in 00:09:32.771 256+0 records out 00:09:32.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147957 s, 70.9 MB/s 00:09:32.771 06:20:04 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.771 06:20:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.030 06:20:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.287 06:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:33.545 06:20:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:33.545 06:20:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:33.803 06:20:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:33.803 [2024-11-20 06:20:05.587827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.804 [2024-11-20 06:20:05.624247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.804 [2024-11-20 06:20:05.624249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.063 [2024-11-20 06:20:05.665061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:34.063 [2024-11-20 06:20:05.665098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:37.353 06:20:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:37.353 06:20:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:37.353 spdk_app_start Round 1 00:09:37.353 06:20:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 361425 /var/tmp/spdk-nbd.sock 00:09:37.353 06:20:08 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 361425 ']' 00:09:37.353 06:20:08 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.353 06:20:08 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:37.353 06:20:08 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:37.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
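The dd/cmp sequence that each round produces above is the heart of nbd_rpc_data_verify: write a random 1 MiB pattern file, stream it onto each exported NBD device with O_DIRECT, then compare the device contents back against the pattern. A condensed sketch of that cycle (the workspace path is abbreviated):

    tmp=test/event/nbdrandtest                      # pattern file, path abbreviated
    dd if=/dev/urandom of="$tmp" bs=4096 count=256  # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                  # verify: must match byte-for-byte
    done
    rm "$tmp"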
00:09:37.353 06:20:08 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:37.353 06:20:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:37.353 06:20:08 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:37.353 06:20:08 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:37.353 06:20:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:37.353 Malloc0 00:09:37.353 06:20:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:37.353 Malloc1 00:09:37.353 06:20:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:37.353 06:20:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:37.354 06:20:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:37.612 /dev/nbd0 00:09:37.612 06:20:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:37.612 06:20:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:37.612 1+0 records in 00:09:37.612 1+0 records out 00:09:37.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00239145 s, 1.7 MB/s 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:37.612 06:20:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:37.612 06:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.612 06:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:37.612 06:20:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:37.871 /dev/nbd1 00:09:37.871 06:20:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:37.871 06:20:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:37.871 1+0 records in 00:09:37.871 1+0 records out 00:09:37.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229287 s, 17.9 MB/s 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:37.871 06:20:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:37.871 06:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.871 06:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:37.871 06:20:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:37.871 06:20:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.871 06:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:38.131 { 00:09:38.131 "nbd_device": "/dev/nbd0", 00:09:38.131 "bdev_name": "Malloc0" 00:09:38.131 }, 00:09:38.131 { 00:09:38.131 "nbd_device": "/dev/nbd1", 00:09:38.131 "bdev_name": "Malloc1" 00:09:38.131 } 00:09:38.131 ]' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:38.131 { 00:09:38.131 "nbd_device": "/dev/nbd0", 00:09:38.131 "bdev_name": "Malloc0" 00:09:38.131 }, 00:09:38.131 { 00:09:38.131 "nbd_device": "/dev/nbd1", 00:09:38.131 "bdev_name": "Malloc1" 00:09:38.131 } 00:09:38.131 ]' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:38.131 /dev/nbd1' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:38.131 /dev/nbd1' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:38.131 256+0 records in 00:09:38.131 256+0 records out 00:09:38.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106432 s, 98.5 MB/s 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:38.131 256+0 records in 00:09:38.131 256+0 records out 00:09:38.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144702 s, 72.5 MB/s 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:38.131 256+0 records in 00:09:38.131 256+0 records out 00:09:38.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148478 s, 70.6 MB/s 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.131 06:20:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.390 06:20:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:38.649 06:20:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:38.649 06:20:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:38.649 06:20:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:38.650 06:20:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.650 06:20:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.650 06:20:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:38.650 06:20:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:38.650 06:20:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.650 06:20:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:38.650 06:20:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.650 06:20:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:38.908 06:20:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:38.908 06:20:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:39.167 06:20:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:39.167 [2024-11-20 06:20:10.958686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.167 [2024-11-20 06:20:10.994916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.167 [2024-11-20 06:20:10.994916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.426 [2024-11-20 06:20:11.036349] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:39.426 [2024-11-20 06:20:11.036389] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:42.716 06:20:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:42.716 06:20:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:42.716 spdk_app_start Round 2 00:09:42.716 06:20:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 361425 /var/tmp/spdk-nbd.sock 00:09:42.716 06:20:13 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 361425 ']' 00:09:42.716 06:20:13 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:42.716 06:20:13 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.716 06:20:13 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:42.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
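At the end of every round, after the disks are stopped, nbd_get_count re-queries the target and expects zero devices; the count comes from filtering nbd_get_disks output through jq and grep, exactly the pipeline visible in the trace. A sketch:

    json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits 1 on zero hits
    [ "$count" -ne 0 ] && { echo "NBD devices still attached"; exit 1; }

The "true" step in the trace at nbd_common.sh@65 is that same exit-status guard: with no devices left, grep -c prints 0 but returns nonzero, which would otherwise trip the harness's error handling.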
00:09:42.716 06:20:13 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.716 06:20:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:42.716 06:20:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:42.716 06:20:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:42.716 06:20:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.716 Malloc0 00:09:42.716 06:20:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.716 Malloc1 00:09:42.716 06:20:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.716 06:20:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:42.975 /dev/nbd0 00:09:42.975 06:20:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:42.975 06:20:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:42.975 06:20:14 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:42.975 1+0 records in 00:09:42.975 1+0 records out 00:09:42.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234205 s, 17.5 MB/s 00:09:42.976 06:20:14 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:42.976 06:20:14 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:42.976 06:20:14 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:42.976 06:20:14 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:42.976 06:20:14 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:42.976 06:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.976 06:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.976 06:20:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:43.235 /dev/nbd1 00:09:43.235 06:20:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:43.235 06:20:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:43.235 1+0 records in 00:09:43.235 1+0 records out 00:09:43.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243353 s, 16.8 MB/s 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:43.235 06:20:14 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:43.235 06:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.235 06:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:43.235 06:20:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:43.235 06:20:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.235 06:20:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:43.494 { 00:09:43.494 "nbd_device": "/dev/nbd0", 00:09:43.494 "bdev_name": "Malloc0" 00:09:43.494 }, 00:09:43.494 { 00:09:43.494 "nbd_device": "/dev/nbd1", 00:09:43.494 "bdev_name": "Malloc1" 00:09:43.494 } 00:09:43.494 ]' 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:43.494 { 00:09:43.494 "nbd_device": "/dev/nbd0", 00:09:43.494 "bdev_name": "Malloc0" 00:09:43.494 }, 00:09:43.494 { 00:09:43.494 "nbd_device": "/dev/nbd1", 00:09:43.494 "bdev_name": "Malloc1" 00:09:43.494 } 00:09:43.494 ]' 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:43.494 /dev/nbd1' 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:43.494 /dev/nbd1' 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:43.494 256+0 records in 00:09:43.494 256+0 records out 00:09:43.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106661 s, 98.3 MB/s 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:43.494 256+0 records in 00:09:43.494 256+0 records out 00:09:43.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138455 s, 75.7 MB/s 00:09:43.494 06:20:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:43.495 256+0 records in 00:09:43.495 256+0 records out 00:09:43.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145961 s, 71.8 MB/s 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.495 06:20:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.754 06:20:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.013 06:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:44.272 06:20:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:44.272 06:20:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:44.532 06:20:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:44.532 [2024-11-20 06:20:16.307454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.532 [2024-11-20 06:20:16.343699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.532 [2024-11-20 06:20:16.343700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.791 [2024-11-20 06:20:16.383704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:44.791 [2024-11-20 06:20:16.383740] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:48.079 06:20:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 361425 /var/tmp/spdk-nbd.sock 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 361425 ']' 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:48.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
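After the final round, the harness tears the app down with killprocess, whose checks appear just below: confirm the pid is set and alive, read its command name with ps, refuse to signal a bare sudo, then kill and wait. A sketch reconstructed from those visible steps (anything beyond what the trace shows is an assumption):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2> /dev/null || return 1      # process must still exist
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1               # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap; ignore exit status
    }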
00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:48.079 06:20:19 event.app_repeat -- event/event.sh@39 -- # killprocess 361425 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 361425 ']' 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 361425 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 361425 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 361425' 00:09:48.079 killing process with pid 361425 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@971 -- # kill 361425 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@976 -- # wait 361425 00:09:48.079 spdk_app_start is called in Round 0. 00:09:48.079 Shutdown signal received, stop current app iteration 00:09:48.079 Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 reinitialization... 00:09:48.079 spdk_app_start is called in Round 1. 00:09:48.079 Shutdown signal received, stop current app iteration 00:09:48.079 Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 reinitialization... 00:09:48.079 spdk_app_start is called in Round 2. 00:09:48.079 Shutdown signal received, stop current app iteration 00:09:48.079 Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 reinitialization... 00:09:48.079 spdk_app_start is called in Round 3. 
00:09:48.079 Shutdown signal received, stop current app iteration 00:09:48.079 06:20:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:48.079 06:20:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:48.079 00:09:48.079 real 0m16.432s 00:09:48.079 user 0m36.189s 00:09:48.079 sys 0m2.428s 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.079 06:20:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:48.079 ************************************ 00:09:48.079 END TEST app_repeat 00:09:48.079 ************************************ 00:09:48.079 06:20:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:48.079 06:20:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:48.079 06:20:19 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:48.079 06:20:19 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.079 06:20:19 event -- common/autotest_common.sh@10 -- # set +x 00:09:48.079 ************************************ 00:09:48.079 START TEST cpu_locks 00:09:48.079 ************************************ 00:09:48.079 06:20:19 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:48.079 * Looking for test storage... 00:09:48.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:48.079 06:20:19 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:48.079 06:20:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:09:48.079 06:20:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:48.079 06:20:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.079 06:20:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:48.079 06:20:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.079 06:20:19 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:48.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.079 --rc genhtml_branch_coverage=1 00:09:48.079 --rc genhtml_function_coverage=1 00:09:48.079 --rc genhtml_legend=1 00:09:48.079 --rc geninfo_all_blocks=1 00:09:48.080 --rc geninfo_unexecuted_blocks=1 00:09:48.080 00:09:48.080 ' 00:09:48.080 06:20:19 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:48.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.080 --rc genhtml_branch_coverage=1 00:09:48.080 --rc genhtml_function_coverage=1 00:09:48.080 --rc genhtml_legend=1 00:09:48.080 --rc geninfo_all_blocks=1 00:09:48.080 --rc geninfo_unexecuted_blocks=1 00:09:48.080 00:09:48.080 ' 00:09:48.080 06:20:19 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:48.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.080 --rc genhtml_branch_coverage=1 00:09:48.080 --rc genhtml_function_coverage=1 00:09:48.080 --rc genhtml_legend=1 00:09:48.080 --rc geninfo_all_blocks=1 00:09:48.080 --rc geninfo_unexecuted_blocks=1 00:09:48.080 00:09:48.080 ' 00:09:48.080 06:20:19 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:48.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.080 --rc genhtml_branch_coverage=1 00:09:48.080 --rc genhtml_function_coverage=1 00:09:48.080 --rc genhtml_legend=1 00:09:48.080 --rc geninfo_all_blocks=1 00:09:48.080 --rc geninfo_unexecuted_blocks=1 00:09:48.080 00:09:48.080 ' 00:09:48.080 06:20:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:48.080 06:20:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:48.080 06:20:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:48.080 06:20:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:48.080 06:20:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:48.080 06:20:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.080 06:20:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 ************************************ 
00:09:48.080 START TEST default_locks 00:09:48.080 ************************************ 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=364532 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 364532 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 364532 ']' 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:48.080 06:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 [2024-11-20 06:20:19.884506] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:48.080 [2024-11-20 06:20:19.884550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364532 ] 00:09:48.339 [2024-11-20 06:20:19.957843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.339 [2024-11-20 06:20:19.999751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.906 06:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:48.906 06:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:09:48.906 06:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 364532 00:09:48.906 06:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 364532 00:09:48.906 06:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:49.474 lslocks: write error 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 364532 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 364532 ']' 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 364532 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 364532 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 364532' 
00:09:49.474 killing process with pid 364532 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 364532 00:09:49.474 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 364532 00:09:49.733 06:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 364532 00:09:49.733 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:49.733 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 364532 00:09:49.733 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:49.733 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.733 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:49.993 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.993 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 364532 00:09:49.993 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 364532 ']' 00:09:49.993 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.993 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:49.993 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
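The default_locks teardown above leans on a lock probe: lslocks -p <pid> piped into grep -q spdk_cpu_lock. The stray "lslocks: write error" lines are presumably lslocks hitting a broken pipe once grep -q exits on its first match, not a test failure; the probe's result is grep's exit status. Reconstructed from the trace, with the locks_exist name taken from it:

    # Lock probe from event/cpu_locks.sh as it appears in the trace above.
    # Returns 0 iff pid $1 still holds a /var/tmp/spdk_cpu_lock_* file lock.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }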
00:09:49.993 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (364532) - No such process 00:09:49.994 ERROR: process (pid: 364532) is no longer running 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:49.994 00:09:49.994 real 0m1.742s 00:09:49.994 user 0m1.834s 00:09:49.994 sys 0m0.575s 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:49.994 06:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.994 ************************************ 00:09:49.994 END TEST default_locks 00:09:49.994 ************************************ 00:09:49.994 06:20:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:49.994 06:20:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:49.994 06:20:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:49.994 06:20:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.994 ************************************ 00:09:49.994 START TEST default_locks_via_rpc 00:09:49.994 ************************************ 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=364864 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 364864 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 364864 ']' 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
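The default_locks_via_rpc run starting here launches spdk_tgt with core locks held (the default) and then toggles them over RPC instead of restarting the target; both RPC names appear verbatim in the trace that follows. A hedged sketch of that sequence, with the default socket path from the log:

    # Locks can be dropped and re-acquired at runtime, as traced below.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks  # releases /var/tmp/spdk_cpu_lock_*
    # ... no_locks: lslocks must now report nothing for the target pid ...
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # re-claims the cores in -m 0x1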
00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:49.994 06:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.994 [2024-11-20 06:20:21.687224] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:49.994 [2024-11-20 06:20:21.687263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364864 ] 00:09:49.994 [2024-11-20 06:20:21.761779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.994 [2024-11-20 06:20:21.803818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 364864 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 364864 00:09:50.253 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:50.513 06:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 364864 00:09:50.513 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 364864 ']' 00:09:50.513 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 364864 00:09:50.513 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:09:50.513 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:50.513 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 364864 00:09:50.772 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:50.772 06:20:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:50.772 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 364864' 00:09:50.772 killing process with pid 364864 00:09:50.772 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 364864 00:09:50.772 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 364864 00:09:51.031 00:09:51.031 real 0m1.009s 00:09:51.031 user 0m0.968s 00:09:51.031 sys 0m0.457s 00:09:51.031 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:51.031 06:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.031 ************************************ 00:09:51.031 END TEST default_locks_via_rpc 00:09:51.031 ************************************ 00:09:51.031 06:20:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:51.031 06:20:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:51.031 06:20:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:51.031 06:20:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:51.031 ************************************ 00:09:51.031 START TEST non_locking_app_on_locked_coremask 00:09:51.031 ************************************ 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=365118 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 365118 /var/tmp/spdk.sock 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 365118 ']' 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:51.031 06:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.031 [2024-11-20 06:20:22.768523] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:51.031 [2024-11-20 06:20:22.768563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365118 ] 00:09:51.031 [2024-11-20 06:20:22.841557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.290 [2024-11-20 06:20:22.885315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=365130 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 365130 /var/tmp/spdk2.sock 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 365130 ']' 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:51.290 06:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.549 [2024-11-20 06:20:23.152872] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:51.549 [2024-11-20 06:20:23.152915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365130 ] 00:09:51.549 [2024-11-20 06:20:23.236324] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:51.549 [2024-11-20 06:20:23.236348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.549 [2024-11-20 06:20:23.316786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 365118 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 365118 00:09:52.486 lslocks: write error 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 365118 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 365118 ']' 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 365118 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 365118 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 365118' 00:09:52.486 killing process with pid 365118 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 365118 00:09:52.486 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 365118 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 365130 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 365130 ']' 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 365130 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 365130 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 365130' 00:09:53.424 killing 
process with pid 365130 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 365130 00:09:53.424 06:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 365130 00:09:53.424 00:09:53.424 real 0m2.522s 00:09:53.424 user 0m2.659s 00:09:53.424 sys 0m0.804s 00:09:53.424 06:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.424 06:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:53.424 ************************************ 00:09:53.424 END TEST non_locking_app_on_locked_coremask 00:09:53.424 ************************************ 00:09:53.683 06:20:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:53.683 06:20:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:53.683 06:20:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.683 06:20:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:53.683 ************************************ 00:09:53.683 START TEST locking_app_on_unlocked_coremask 00:09:53.683 ************************************ 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=365518 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 365518 /var/tmp/spdk.sock 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 365518 ']' 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:53.683 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:53.683 [2024-11-20 06:20:25.357584] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:53.683 [2024-11-20 06:20:25.357624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365518 ] 00:09:53.683 [2024-11-20 06:20:25.432636] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:53.683 [2024-11-20 06:20:25.432661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.683 [2024-11-20 06:20:25.474477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=365623 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 365623 /var/tmp/spdk2.sock 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 365623 ']' 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:53.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:53.942 06:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:53.942 [2024-11-20 06:20:25.730050] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:53.942 [2024-11-20 06:20:25.730096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365623 ] 00:09:54.200 [2024-11-20 06:20:25.814068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.200 [2024-11-20 06:20:25.902132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.768 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:54.768 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:54.768 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 365623 00:09:54.768 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 365623 00:09:54.768 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:55.027 lslocks: write error 00:09:55.027 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 365518 00:09:55.027 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 365518 ']' 00:09:55.027 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 365518 00:09:55.027 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:55.027 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:55.027 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 365518 00:09:55.286 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:55.286 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:55.286 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 365518' 00:09:55.286 killing process with pid 365518 00:09:55.286 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 365518 00:09:55.286 06:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 365518 00:09:55.854 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 365623 00:09:55.854 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 365623 ']' 00:09:55.854 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 365623 00:09:55.854 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:55.854 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:55.854 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 365623 00:09:55.854 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:55.854 06:20:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:55.855 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 365623' 00:09:55.855 killing process with pid 365623 00:09:55.855 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 365623 00:09:55.855 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 365623 00:09:56.114 00:09:56.114 real 0m2.495s 00:09:56.114 user 0m2.623s 00:09:56.114 sys 0m0.827s 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.114 ************************************ 00:09:56.114 END TEST locking_app_on_unlocked_coremask 00:09:56.114 ************************************ 00:09:56.114 06:20:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:56.114 06:20:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.114 06:20:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.114 06:20:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:56.114 ************************************ 00:09:56.114 START TEST locking_app_on_locked_coremask 00:09:56.114 ************************************ 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=365906 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 365906 /var/tmp/spdk.sock 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 365906 ']' 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:56.114 06:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.114 [2024-11-20 06:20:27.926340] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:56.114 [2024-11-20 06:20:27.926383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365906 ] 00:09:56.373 [2024-11-20 06:20:28.002017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.373 [2024-11-20 06:20:28.044173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=366126 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 366126 /var/tmp/spdk2.sock 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 366126 /var/tmp/spdk2.sock 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 366126 /var/tmp/spdk2.sock 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 366126 ']' 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:56.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:56.632 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.632 [2024-11-20 06:20:28.320167] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:56.632 [2024-11-20 06:20:28.320219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366126 ] 00:09:56.632 [2024-11-20 06:20:28.404747] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 365906 has claimed it. 00:09:56.632 [2024-11-20 06:20:28.404776] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:57.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (366126) - No such process 00:09:57.199 ERROR: process (pid: 366126) is no longer running 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 365906 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 365906 00:09:57.199 06:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:57.765 lslocks: write error 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 365906 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 365906 ']' 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 365906 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 365906 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 365906' 00:09:57.765 killing process with pid 365906 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 365906 00:09:57.765 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 365906 00:09:58.023 00:09:58.023 real 0m1.840s 00:09:58.023 user 0m1.939s 00:09:58.023 sys 0m0.630s 00:09:58.023 06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.023 
06:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:58.023 ************************************ 00:09:58.023 END TEST locking_app_on_locked_coremask 00:09:58.023 ************************************ 00:09:58.023 06:20:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:58.023 06:20:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:58.023 06:20:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.023 06:20:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:58.023 ************************************ 00:09:58.023 START TEST locking_overlapped_coremask 00:09:58.023 ************************************ 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=366384 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 366384 /var/tmp/spdk.sock 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 366384 ']' 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:58.023 06:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:58.023 [2024-11-20 06:20:29.832110] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:58.023 [2024-11-20 06:20:29.832150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366384 ] 00:09:58.307 [2024-11-20 06:20:29.905330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:58.307 [2024-11-20 06:20:29.949676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.307 [2024-11-20 06:20:29.949785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.307 [2024-11-20 06:20:29.949785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=366395 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 366395 /var/tmp/spdk2.sock 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 366395 /var/tmp/spdk2.sock 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 366395 /var/tmp/spdk2.sock 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 366395 ']' 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:58.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:58.565 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:58.565 [2024-11-20 06:20:30.213155] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:09:58.565 [2024-11-20 06:20:30.213213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366395 ] 00:09:58.565 [2024-11-20 06:20:30.306437] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 366384 has claimed it. 00:09:58.565 [2024-11-20 06:20:30.306474] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:59.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (366395) - No such process 00:09:59.132 ERROR: process (pid: 366395) is no longer running 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 366384 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 366384 ']' 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 366384 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 366384 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 366384' 00:09:59.132 killing process with pid 366384 00:09:59.132 06:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 366384 00:09:59.132 06:20:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 366384 00:09:59.392 00:09:59.392 real 0m1.429s 00:09:59.392 user 0m3.939s 00:09:59.392 sys 0m0.394s 00:09:59.392 06:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.392 06:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:59.392 ************************************ 00:09:59.392 END TEST locking_overlapped_coremask 00:09:59.392 ************************************ 00:09:59.651 06:20:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:59.651 06:20:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:59.651 06:20:31 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.651 06:20:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:59.651 ************************************ 00:09:59.651 START TEST locking_overlapped_coremask_via_rpc 00:09:59.651 ************************************ 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=366653 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 366653 /var/tmp/spdk.sock 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 366653 ']' 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.651 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.651 [2024-11-20 06:20:31.331866] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:59.651 [2024-11-20 06:20:31.331912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366653 ] 00:09:59.651 [2024-11-20 06:20:31.405609] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:59.651 [2024-11-20 06:20:31.405638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.651 [2024-11-20 06:20:31.446355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.651 [2024-11-20 06:20:31.446464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.651 [2024-11-20 06:20:31.446464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=366664 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 366664 /var/tmp/spdk2.sock 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 366664 ']' 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:59.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.911 06:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.911 [2024-11-20 06:20:31.715256] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:09:59.911 [2024-11-20 06:20:31.715307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366664 ] 00:10:00.170 [2024-11-20 06:20:31.806591] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:00.170 [2024-11-20 06:20:31.806619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.170 [2024-11-20 06:20:31.893575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.170 [2024-11-20 06:20:31.893693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.170 [2024-11-20 06:20:31.893695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.738 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.738 [2024-11-20 06:20:32.571277] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 366653 has claimed it. 
00:10:00.998 request: 00:10:00.998 { 00:10:00.998 "method": "framework_enable_cpumask_locks", 00:10:00.998 "req_id": 1 00:10:00.998 } 00:10:00.998 Got JSON-RPC error response 00:10:00.998 response: 00:10:00.998 { 00:10:00.998 "code": -32603, 00:10:00.998 "message": "Failed to claim CPU core: 2" 00:10:00.998 } 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 366653 /var/tmp/spdk.sock 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 366653 ']' 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 366664 /var/tmp/spdk2.sock 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 366664 ']' 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:00.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
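What the via_rpc variant above demonstrates: both targets start with --disable-cpumask-locks, so neither claims a per-core lock file at boot. The first framework_enable_cpumask_locks call then claims /var/tmp/spdk_cpu_lock_000 through _002 for mask 0x7, and the same RPC against the second target (mask 0x1c) fails on the shared core 2 with the -32603 "Failed to claim CPU core: 2" response shown above. A minimal sketch of the same scenario outside the test harness, assuming spdk_tgt and rpc.py sit under the usual build/bin and scripts directories (paths illustrative, not taken from this run):

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # (wait for each UNIX socket to come up before issuing RPCs)
  ./scripts/rpc.py framework_enable_cpumask_locks                     # first target locks cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected to fail with -32603: core 2's lock file is already held by the first target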
00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:00.998 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.257 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:01.257 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:01.257 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:01.257 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:01.257 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:01.257 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:01.257 00:10:01.257 real 0m1.702s 00:10:01.258 user 0m0.826s 00:10:01.258 sys 0m0.134s 00:10:01.258 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.258 06:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.258 ************************************ 00:10:01.258 END TEST locking_overlapped_coremask_via_rpc 00:10:01.258 ************************************ 00:10:01.258 06:20:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:01.258 06:20:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 366653 ]] 00:10:01.258 06:20:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 366653 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 366653 ']' 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 366653 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 366653 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 366653' 00:10:01.258 killing process with pid 366653 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 366653 00:10:01.258 06:20:33 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 366653 00:10:01.826 06:20:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 366664 ]] 00:10:01.826 06:20:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 366664 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 366664 ']' 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 366664 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 366664 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 366664' 00:10:01.826 killing process with pid 366664 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 366664 00:10:01.826 06:20:33 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 366664 00:10:02.086 06:20:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:02.086 06:20:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:02.086 06:20:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 366653 ]] 00:10:02.086 06:20:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 366653 00:10:02.086 06:20:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 366653 ']' 00:10:02.086 06:20:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 366653 00:10:02.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (366653) - No such process 00:10:02.086 06:20:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 366653 is not found' 00:10:02.086 Process with pid 366653 is not found 00:10:02.086 06:20:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 366664 ]] 00:10:02.086 06:20:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 366664 00:10:02.086 06:20:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 366664 ']' 00:10:02.086 06:20:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 366664 00:10:02.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (366664) - No such process 00:10:02.086 06:20:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 366664 is not found' 00:10:02.086 Process with pid 366664 is not found 00:10:02.086 06:20:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:02.086 00:10:02.086 real 0m14.120s 00:10:02.086 user 0m24.481s 00:10:02.086 sys 0m4.791s 00:10:02.086 06:20:33 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.086 06:20:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:02.086 ************************************ 00:10:02.086 END TEST cpu_locks 00:10:02.086 ************************************ 00:10:02.086 00:10:02.086 real 0m38.459s 00:10:02.086 user 1m12.863s 00:10:02.086 sys 0m8.210s 00:10:02.086 06:20:33 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.086 06:20:33 event -- common/autotest_common.sh@10 -- # set +x 00:10:02.086 ************************************ 00:10:02.086 END TEST event 00:10:02.086 ************************************ 00:10:02.086 06:20:33 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:02.086 06:20:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:02.086 06:20:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.086 06:20:33 -- common/autotest_common.sh@10 -- # set +x 00:10:02.086 ************************************ 00:10:02.086 START TEST thread 00:10:02.086 ************************************ 00:10:02.086 06:20:33 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:02.346 * Looking for test storage... 00:10:02.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:10:02.346 06:20:33 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:02.346 06:20:33 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:10:02.346 06:20:33 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:02.346 06:20:33 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:02.346 06:20:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.346 06:20:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.346 06:20:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.346 06:20:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.346 06:20:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.346 06:20:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.346 06:20:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.346 06:20:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.346 06:20:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.346 06:20:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.346 06:20:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.346 06:20:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:02.346 06:20:34 thread -- scripts/common.sh@345 -- # : 1 00:10:02.346 06:20:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.346 06:20:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.346 06:20:34 thread -- scripts/common.sh@365 -- # decimal 1 00:10:02.346 06:20:34 thread -- scripts/common.sh@353 -- # local d=1 00:10:02.346 06:20:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.346 06:20:34 thread -- scripts/common.sh@355 -- # echo 1 00:10:02.346 06:20:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.346 06:20:34 thread -- scripts/common.sh@366 -- # decimal 2 00:10:02.346 06:20:34 thread -- scripts/common.sh@353 -- # local d=2 00:10:02.346 06:20:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.346 06:20:34 thread -- scripts/common.sh@355 -- # echo 2 00:10:02.346 06:20:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.346 06:20:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.346 06:20:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.346 06:20:34 thread -- scripts/common.sh@368 -- # return 0 00:10:02.346 06:20:34 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.346 06:20:34 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:02.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.346 --rc genhtml_branch_coverage=1 00:10:02.346 --rc genhtml_function_coverage=1 00:10:02.346 --rc genhtml_legend=1 00:10:02.346 --rc geninfo_all_blocks=1 00:10:02.346 --rc geninfo_unexecuted_blocks=1 00:10:02.346 00:10:02.346 ' 00:10:02.346 06:20:34 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:02.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.346 --rc genhtml_branch_coverage=1 00:10:02.346 --rc genhtml_function_coverage=1 00:10:02.346 --rc genhtml_legend=1 00:10:02.346 --rc geninfo_all_blocks=1 00:10:02.346 --rc geninfo_unexecuted_blocks=1 00:10:02.346 00:10:02.346 ' 00:10:02.346 06:20:34 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:02.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.346 --rc genhtml_branch_coverage=1 00:10:02.346 --rc genhtml_function_coverage=1 00:10:02.346 --rc genhtml_legend=1 00:10:02.346 --rc geninfo_all_blocks=1 00:10:02.346 --rc geninfo_unexecuted_blocks=1 00:10:02.346 00:10:02.346 ' 00:10:02.346 06:20:34 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:02.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.346 --rc genhtml_branch_coverage=1 00:10:02.346 --rc genhtml_function_coverage=1 00:10:02.346 --rc genhtml_legend=1 00:10:02.346 --rc geninfo_all_blocks=1 00:10:02.346 --rc geninfo_unexecuted_blocks=1 00:10:02.346 00:10:02.346 ' 00:10:02.346 06:20:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:02.346 06:20:34 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:10:02.346 06:20:34 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.346 06:20:34 thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.346 ************************************ 00:10:02.346 START TEST thread_poller_perf 00:10:02.346 ************************************ 00:10:02.346 06:20:34 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:02.346 [2024-11-20 06:20:34.069448] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:10:02.346 [2024-11-20 06:20:34.069519] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367229 ] 00:10:02.346 [2024-11-20 06:20:34.146881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.605 [2024-11-20 06:20:34.187407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.605 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:03.543 [2024-11-20T05:20:35.379Z] ====================================== 00:10:03.543 [2024-11-20T05:20:35.379Z] busy:2106779510 (cyc) 00:10:03.543 [2024-11-20T05:20:35.379Z] total_run_count: 414000 00:10:03.543 [2024-11-20T05:20:35.379Z] tsc_hz: 2100000000 (cyc) 00:10:03.543 [2024-11-20T05:20:35.379Z] ====================================== 00:10:03.543 [2024-11-20T05:20:35.379Z] poller_cost: 5088 (cyc), 2422 (nsec) 00:10:03.543 00:10:03.543 real 0m1.186s 00:10:03.543 user 0m1.113s 00:10:03.543 sys 0m0.069s 00:10:03.543 06:20:35 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.543 06:20:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:03.543 ************************************ 00:10:03.543 END TEST thread_poller_perf 00:10:03.543 ************************************ 00:10:03.543 06:20:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:03.543 06:20:35 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:10:03.543 06:20:35 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.543 06:20:35 thread -- common/autotest_common.sh@10 -- # set +x 00:10:03.543 ************************************ 00:10:03.543 START TEST thread_poller_perf 00:10:03.543 ************************************ 00:10:03.543 06:20:35 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:03.543 [2024-11-20 06:20:35.328606] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:10:03.543 [2024-11-20 06:20:35.328670] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367477 ] 00:10:03.802 [2024-11-20 06:20:35.410111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.802 [2024-11-20 06:20:35.450128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.802 Running 1000 pollers for 1 seconds with 0 microseconds period. 
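Reading the poller_perf tables: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (a derivation inferred from the printed fields; the zero-period run's table follows below). For the 1-microsecond-period run above:

  poller_cost = busy / total_run_count = 2106779510 / 414000 ≈ 5088 cyc
  5088 cyc / 2100000000 cyc/s ≈ 2422 nsec

The same arithmetic on the second table gives 2101372888 / 5572000 ≈ 377 cyc ≈ 179 nsec, consistent with timed pollers (-l 1) carrying more per-call bookkeeping than active pollers (-l 0).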
00:10:04.739 [2024-11-20T05:20:36.575Z] ====================================== 00:10:04.739 [2024-11-20T05:20:36.575Z] busy:2101372888 (cyc) 00:10:04.739 [2024-11-20T05:20:36.575Z] total_run_count: 5572000 00:10:04.739 [2024-11-20T05:20:36.575Z] tsc_hz: 2100000000 (cyc) 00:10:04.739 [2024-11-20T05:20:36.575Z] ====================================== 00:10:04.739 [2024-11-20T05:20:36.575Z] poller_cost: 377 (cyc), 179 (nsec) 00:10:04.739 00:10:04.739 real 0m1.184s 00:10:04.739 user 0m1.102s 00:10:04.739 sys 0m0.078s 00:10:04.740 06:20:36 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.740 06:20:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:04.740 ************************************ 00:10:04.740 END TEST thread_poller_perf 00:10:04.740 ************************************ 00:10:04.740 06:20:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:04.740 00:10:04.740 real 0m2.684s 00:10:04.740 user 0m2.376s 00:10:04.740 sys 0m0.321s 00:10:04.740 06:20:36 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.740 06:20:36 thread -- common/autotest_common.sh@10 -- # set +x 00:10:04.740 ************************************ 00:10:04.740 END TEST thread 00:10:04.740 ************************************ 00:10:04.740 06:20:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:04.740 06:20:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:04.740 06:20:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.740 06:20:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.740 06:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:04.999 ************************************ 00:10:04.999 START TEST app_cmdline 00:10:04.999 ************************************ 00:10:04.999 06:20:36 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:04.999 * Looking for test storage... 
00:10:04.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:04.999 06:20:36 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.999 06:20:36 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.999 06:20:36 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.999 06:20:36 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.999 06:20:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:04.999 06:20:36 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.999 06:20:36 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.999 --rc genhtml_branch_coverage=1 00:10:04.999 --rc genhtml_function_coverage=1 00:10:04.999 --rc genhtml_legend=1 00:10:05.000 --rc geninfo_all_blocks=1 00:10:05.000 --rc geninfo_unexecuted_blocks=1 00:10:05.000 00:10:05.000 ' 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.000 --rc genhtml_branch_coverage=1 00:10:05.000 --rc genhtml_function_coverage=1 00:10:05.000 --rc genhtml_legend=1 00:10:05.000 --rc geninfo_all_blocks=1 00:10:05.000 --rc geninfo_unexecuted_blocks=1 
00:10:05.000 00:10:05.000 ' 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.000 --rc genhtml_branch_coverage=1 00:10:05.000 --rc genhtml_function_coverage=1 00:10:05.000 --rc genhtml_legend=1 00:10:05.000 --rc geninfo_all_blocks=1 00:10:05.000 --rc geninfo_unexecuted_blocks=1 00:10:05.000 00:10:05.000 ' 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.000 --rc genhtml_branch_coverage=1 00:10:05.000 --rc genhtml_function_coverage=1 00:10:05.000 --rc genhtml_legend=1 00:10:05.000 --rc geninfo_all_blocks=1 00:10:05.000 --rc geninfo_unexecuted_blocks=1 00:10:05.000 00:10:05.000 ' 00:10:05.000 06:20:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:05.000 06:20:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=367773 00:10:05.000 06:20:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:05.000 06:20:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 367773 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 367773 ']' 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:05.000 06:20:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:05.000 [2024-11-20 06:20:36.831500] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:10:05.000 [2024-11-20 06:20:36.831549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367773 ] 00:10:05.258 [2024-11-20 06:20:36.906883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.258 [2024-11-20 06:20:36.945932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.194 06:20:37 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.194 06:20:37 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:06.194 { 00:10:06.194 "version": "SPDK v25.01-pre git sha1 95f6a056e", 00:10:06.194 "fields": { 00:10:06.194 "major": 25, 00:10:06.194 "minor": 1, 00:10:06.194 "patch": 0, 00:10:06.194 "suffix": "-pre", 00:10:06.194 "commit": "95f6a056e" 00:10:06.194 } 00:10:06.194 } 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:06.194 06:20:37 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:06.194 06:20:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:06.194 06:20:37 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:06.194 06:20:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:06.195 06:20:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:06.195 06:20:37 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:06.454 request: 00:10:06.454 { 00:10:06.454 "method": "env_dpdk_get_mem_stats", 00:10:06.454 "req_id": 1 00:10:06.454 } 00:10:06.454 Got JSON-RPC error response 00:10:06.454 response: 00:10:06.454 { 00:10:06.454 "code": -32601, 00:10:06.454 "message": "Method not found" 00:10:06.454 } 00:10:06.454 06:20:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:06.454 06:20:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:06.454 06:20:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:06.454 06:20:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:06.454 06:20:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 367773 00:10:06.454 06:20:38 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 367773 ']' 00:10:06.454 06:20:38 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 367773 00:10:06.454 06:20:38 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:10:06.454 06:20:38 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:06.455 06:20:38 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 367773 00:10:06.455 06:20:38 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:06.455 06:20:38 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:06.455 06:20:38 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 367773' 00:10:06.455 killing process with pid 367773 00:10:06.455 06:20:38 app_cmdline -- common/autotest_common.sh@971 -- # kill 367773 00:10:06.455 06:20:38 app_cmdline -- common/autotest_common.sh@976 -- # wait 367773 00:10:06.713 00:10:06.713 real 0m1.845s 00:10:06.713 user 0m2.204s 00:10:06.713 sys 0m0.493s 00:10:06.713 06:20:38 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.714 06:20:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:06.714 ************************************ 00:10:06.714 END TEST app_cmdline 00:10:06.714 ************************************ 00:10:06.714 06:20:38 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:06.714 06:20:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:06.714 06:20:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.714 06:20:38 -- common/autotest_common.sh@10 -- # set +x 00:10:06.714 ************************************ 00:10:06.714 START TEST version 00:10:06.714 ************************************ 00:10:06.714 06:20:38 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:06.973 * Looking for test storage... 
00:10:06.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:06.973 06:20:38 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:06.973 06:20:38 version -- common/autotest_common.sh@1691 -- # lcov --version 00:10:06.973 06:20:38 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:06.973 06:20:38 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:06.973 06:20:38 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.973 06:20:38 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.973 06:20:38 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.973 06:20:38 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.973 06:20:38 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.973 06:20:38 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.973 06:20:38 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.973 06:20:38 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.973 06:20:38 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.973 06:20:38 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.973 06:20:38 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.973 06:20:38 version -- scripts/common.sh@344 -- # case "$op" in 00:10:06.973 06:20:38 version -- scripts/common.sh@345 -- # : 1 00:10:06.973 06:20:38 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.973 06:20:38 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.973 06:20:38 version -- scripts/common.sh@365 -- # decimal 1 00:10:06.973 06:20:38 version -- scripts/common.sh@353 -- # local d=1 00:10:06.973 06:20:38 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.973 06:20:38 version -- scripts/common.sh@355 -- # echo 1 00:10:06.973 06:20:38 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.973 06:20:38 version -- scripts/common.sh@366 -- # decimal 2 00:10:06.973 06:20:38 version -- scripts/common.sh@353 -- # local d=2 00:10:06.973 06:20:38 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.973 06:20:38 version -- scripts/common.sh@355 -- # echo 2 00:10:06.973 06:20:38 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.973 06:20:38 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.973 06:20:38 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.973 06:20:38 version -- scripts/common.sh@368 -- # return 0 00:10:06.973 06:20:38 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.973 06:20:38 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:06.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.973 --rc genhtml_branch_coverage=1 00:10:06.973 --rc genhtml_function_coverage=1 00:10:06.973 --rc genhtml_legend=1 00:10:06.973 --rc geninfo_all_blocks=1 00:10:06.973 --rc geninfo_unexecuted_blocks=1 00:10:06.973 00:10:06.973 ' 00:10:06.973 06:20:38 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:06.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.973 --rc genhtml_branch_coverage=1 00:10:06.973 --rc genhtml_function_coverage=1 00:10:06.973 --rc genhtml_legend=1 00:10:06.973 --rc geninfo_all_blocks=1 00:10:06.973 --rc geninfo_unexecuted_blocks=1 00:10:06.973 00:10:06.973 ' 00:10:06.974 06:20:38 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:06.974 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.974 --rc genhtml_branch_coverage=1 00:10:06.974 --rc genhtml_function_coverage=1 00:10:06.974 --rc genhtml_legend=1 00:10:06.974 --rc geninfo_all_blocks=1 00:10:06.974 --rc geninfo_unexecuted_blocks=1 00:10:06.974 00:10:06.974 ' 00:10:06.974 06:20:38 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:06.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.974 --rc genhtml_branch_coverage=1 00:10:06.974 --rc genhtml_function_coverage=1 00:10:06.974 --rc genhtml_legend=1 00:10:06.974 --rc geninfo_all_blocks=1 00:10:06.974 --rc geninfo_unexecuted_blocks=1 00:10:06.974 00:10:06.974 ' 00:10:06.974 06:20:38 version -- app/version.sh@17 -- # get_header_version major 00:10:06.974 06:20:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:06.974 06:20:38 version -- app/version.sh@14 -- # cut -f2 00:10:06.974 06:20:38 version -- app/version.sh@14 -- # tr -d '"' 00:10:06.974 06:20:38 version -- app/version.sh@17 -- # major=25 00:10:06.974 06:20:38 version -- app/version.sh@18 -- # get_header_version minor 00:10:06.974 06:20:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:06.974 06:20:38 version -- app/version.sh@14 -- # cut -f2 00:10:06.974 06:20:38 version -- app/version.sh@14 -- # tr -d '"' 00:10:06.974 06:20:38 version -- app/version.sh@18 -- # minor=1 00:10:06.974 06:20:38 version -- app/version.sh@19 -- # get_header_version patch 00:10:06.974 06:20:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:06.974 06:20:38 version -- app/version.sh@14 -- # cut -f2 00:10:06.974 06:20:38 version -- app/version.sh@14 -- # tr -d '"' 00:10:06.974 06:20:38 version -- app/version.sh@19 -- # patch=0 00:10:06.974 06:20:38 version -- app/version.sh@20 -- # get_header_version suffix 00:10:06.974 06:20:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:06.974 06:20:38 version -- app/version.sh@14 -- # cut -f2 00:10:06.974 06:20:38 version -- app/version.sh@14 -- # tr -d '"' 00:10:06.974 06:20:38 version -- app/version.sh@20 -- # suffix=-pre 00:10:06.974 06:20:38 version -- app/version.sh@22 -- # version=25.1 00:10:06.974 06:20:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:06.974 06:20:38 version -- app/version.sh@28 -- # version=25.1rc0 00:10:06.974 06:20:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:06.974 06:20:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:06.974 06:20:38 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:06.974 06:20:38 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:06.974 00:10:06.974 real 0m0.242s 00:10:06.974 user 0m0.139s 00:10:06.974 sys 0m0.147s 00:10:06.974 06:20:38 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.974 
06:20:38 version -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 ************************************ 00:10:06.974 END TEST version 00:10:06.974 ************************************ 00:10:06.974 06:20:38 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:06.974 06:20:38 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:06.974 06:20:38 -- spdk/autotest.sh@194 -- # uname -s 00:10:06.974 06:20:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:06.974 06:20:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:06.974 06:20:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:06.974 06:20:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:06.974 06:20:38 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:10:06.974 06:20:38 -- spdk/autotest.sh@256 -- # timing_exit lib 00:10:06.974 06:20:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.974 06:20:38 -- common/autotest_common.sh@10 -- # set +x 00:10:07.234 06:20:38 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:10:07.234 06:20:38 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:10:07.234 06:20:38 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:10:07.234 06:20:38 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:10:07.234 06:20:38 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:10:07.234 06:20:38 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:10:07.234 06:20:38 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:07.234 06:20:38 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:07.234 06:20:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:07.234 06:20:38 -- common/autotest_common.sh@10 -- # set +x 00:10:07.234 ************************************ 00:10:07.234 START TEST nvmf_tcp 00:10:07.234 ************************************ 00:10:07.234 06:20:38 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:07.234 * Looking for test storage... 
00:10:07.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:07.234 06:20:38 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:07.234 06:20:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:07.234 06:20:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.234 06:20:39 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:07.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.234 --rc genhtml_branch_coverage=1 00:10:07.234 --rc genhtml_function_coverage=1 00:10:07.234 --rc genhtml_legend=1 00:10:07.234 --rc geninfo_all_blocks=1 00:10:07.234 --rc geninfo_unexecuted_blocks=1 00:10:07.234 00:10:07.234 ' 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:07.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.234 --rc genhtml_branch_coverage=1 00:10:07.234 --rc genhtml_function_coverage=1 00:10:07.234 --rc genhtml_legend=1 00:10:07.234 --rc geninfo_all_blocks=1 00:10:07.234 --rc geninfo_unexecuted_blocks=1 00:10:07.234 00:10:07.234 ' 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:10:07.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.234 --rc genhtml_branch_coverage=1 00:10:07.234 --rc genhtml_function_coverage=1 00:10:07.234 --rc genhtml_legend=1 00:10:07.234 --rc geninfo_all_blocks=1 00:10:07.234 --rc geninfo_unexecuted_blocks=1 00:10:07.234 00:10:07.234 ' 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:07.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.234 --rc genhtml_branch_coverage=1 00:10:07.234 --rc genhtml_function_coverage=1 00:10:07.234 --rc genhtml_legend=1 00:10:07.234 --rc geninfo_all_blocks=1 00:10:07.234 --rc geninfo_unexecuted_blocks=1 00:10:07.234 00:10:07.234 ' 00:10:07.234 06:20:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:07.234 06:20:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:07.234 06:20:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:07.234 06:20:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.492 ************************************ 00:10:07.492 START TEST nvmf_target_core 00:10:07.492 ************************************ 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:07.492 * Looking for test storage... 00:10:07.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:07.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.492 --rc genhtml_branch_coverage=1 00:10:07.492 --rc genhtml_function_coverage=1 00:10:07.492 --rc genhtml_legend=1 00:10:07.492 --rc geninfo_all_blocks=1 00:10:07.492 --rc geninfo_unexecuted_blocks=1 00:10:07.492 00:10:07.492 ' 00:10:07.492 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:07.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.492 --rc genhtml_branch_coverage=1 00:10:07.492 --rc genhtml_function_coverage=1 00:10:07.492 --rc genhtml_legend=1 00:10:07.492 --rc geninfo_all_blocks=1 00:10:07.493 --rc geninfo_unexecuted_blocks=1 00:10:07.493 00:10:07.493 ' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:07.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.493 --rc genhtml_branch_coverage=1 00:10:07.493 --rc genhtml_function_coverage=1 00:10:07.493 --rc genhtml_legend=1 00:10:07.493 --rc geninfo_all_blocks=1 00:10:07.493 --rc geninfo_unexecuted_blocks=1 00:10:07.493 00:10:07.493 ' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:07.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.493 --rc genhtml_branch_coverage=1 00:10:07.493 --rc genhtml_function_coverage=1 00:10:07.493 --rc genhtml_legend=1 00:10:07.493 --rc geninfo_all_blocks=1 00:10:07.493 --rc geninfo_unexecuted_blocks=1 00:10:07.493 00:10:07.493 ' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@2-6 -- # [PATH dump elided: export.sh prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the system PATH, exports PATH, then echoes it; the identical entries are dumped four times over]
00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:07.493 06:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.752
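The "[: : integer expression expected" complaint above comes from test/nvmf/common.sh line 33, where the trace shows '[' '' -eq 1 ']': an unset flag variable expands to the empty string, and the test builtin requires both operands of -eq to be integers. A minimal bash sketch of the failure and a defensive rewrite; the flag name below is hypothetical, since the trace does not show which variable was empty:

  #!/usr/bin/env bash
  # Reproduce the error recorded in the log: '' is not an integer operand.
  unset SOME_TEST_FLAG
  [ "$SOME_TEST_FLAG" -eq 1 ] && echo enabled   # prints "[: : integer expression expected"
  # Defensive form: default the expansion to 0 so the operand is always numeric.
  [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo enabled

The failing test returns status 2 rather than aborting the shell, which is why the run simply continues past the message here.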
************************************ 00:10:07.752 START TEST nvmf_abort 00:10:07.752 ************************************ 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:07.752 * Looking for test storage... 00:10:07.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.752 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.753 --rc genhtml_branch_coverage=1 00:10:07.753 --rc genhtml_function_coverage=1 00:10:07.753 --rc genhtml_legend=1 00:10:07.753 --rc geninfo_all_blocks=1 00:10:07.753 --rc geninfo_unexecuted_blocks=1 00:10:07.753 00:10:07.753 ' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.753 --rc genhtml_branch_coverage=1 00:10:07.753 --rc genhtml_function_coverage=1 00:10:07.753 --rc genhtml_legend=1 00:10:07.753 --rc geninfo_all_blocks=1 00:10:07.753 --rc geninfo_unexecuted_blocks=1 00:10:07.753 00:10:07.753 ' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.753 --rc genhtml_branch_coverage=1 00:10:07.753 --rc genhtml_function_coverage=1 00:10:07.753 --rc genhtml_legend=1 00:10:07.753 --rc geninfo_all_blocks=1 00:10:07.753 --rc geninfo_unexecuted_blocks=1 00:10:07.753 00:10:07.753 ' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.753 --rc genhtml_branch_coverage=1 00:10:07.753 --rc genhtml_function_coverage=1 00:10:07.753 --rc genhtml_legend=1 00:10:07.753 --rc geninfo_all_blocks=1 00:10:07.753 --rc geninfo_unexecuted_blocks=1 00:10:07.753 00:10:07.753 ' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2-6 -- # [PATH dump elided: same /opt/golangci, /opt/protoc and /opt/go prepends as above, exported and echoed]
00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
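The "lt 1.15 2" / cmp_versions walk traced before this test (and repeated before each nested test) splits both version strings on ".-:" and compares them field by field to pick the right lcov option set. A standalone sketch of the same comparison, simplified to purely numeric dot-separated versions rather than the full scripts/common.sh helper:

  # Return success (0) if version $1 is strictly lower than version $2.
  version_lt() {
      local -a v1 v2
      IFS=. read -ra v1 <<< "$1"
      IFS=. read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields compare as 0
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # equal is not "less than"
  }
  version_lt 1.15 2 && echo "lcov < 2: use the 1.x option set"   # succeeds, matching the trace's return 0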
00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.753 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.754 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.754 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.754 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.754 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.754 06:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.326 06:20:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:14.326 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:14.326 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.326 06:20:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:14.326 Found net devices under 0000:86:00.0: cvl_0_0 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:14.326 Found net devices under 0000:86:00.1: cvl_0_1 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.326 06:20:45 
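The device scan above buckets supported NICs by PCI vendor:device pairs (both ports match Intel 0x159b from the e810 table) and then resolves each PCI function to its kernel netdev by globbing sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. The same lookup by hand, as a sketch; the BDF is the one from this log:

  # Map a PCI function to its network interface name(s) via sysfs,
  # mirroring the pci_net_devs glob shown in the trace.
  pci=0000:86:00.0
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] || continue        # glob stays literal if no netdev is bound
      echo "$pci -> ${dev##*/}"        # expected here: 0000:86:00.0 -> cvl_0_0
  done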
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:10:14.326 00:10:14.326 --- 10.0.0.2 ping statistics --- 00:10:14.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.326 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:10:14.326 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:14.326 00:10:14.326 --- 10.0.0.1 ping statistics --- 00:10:14.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.327 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=371461 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 371461 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 371461 ']' 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 [2024-11-20 06:20:45.605848] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
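nvmf_tcp_init above builds a point-to-point rig out of the two e810 ports: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and both directions are ping-verified. A condensed sketch of the same setup, assuming root and the interface names from this log:

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                    # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1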
00:10:14.327 [2024-11-20 06:20:45.605896] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.327 [2024-11-20 06:20:45.683218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.327 [2024-11-20 06:20:45.726760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.327 [2024-11-20 06:20:45.726796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.327 [2024-11-20 06:20:45.726803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.327 [2024-11-20 06:20:45.726809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.327 [2024-11-20 06:20:45.726814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.327 [2024-11-20 06:20:45.728272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.327 [2024-11-20 06:20:45.728303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.327 [2024-11-20 06:20:45.728303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 [2024-11-20 06:20:45.867962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 Malloc0 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 Delay0 
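The target comes up inside the namespace with core mask 0xE (binary 1110, hence the reactors on cores 1, 2 and 3) and is then configured over its RPC socket: a TCP transport, a 64 MiB malloc bdev with 4 KiB blocks, and a delay bdev stacked on top. Roughly the same bring-up issued with scripts/rpc.py instead of the harness's rpc_cmd wrapper, as a sketch run from the spdk tree:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # (the harness waits for the RPC socket via waitforlisten before continuing)
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB backing bdev, 4 KiB blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s avg/p99 read and write latency

The large artificial latency is the point of the abort test: IOs stay outstanding long enough that aborting them can actually be exercised.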
00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 [2024-11-20 06:20:45.952205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.327 06:20:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:14.327 [2024-11-20 06:20:46.089861] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:16.910 Initializing NVMe Controllers 00:10:16.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:16.910 controller IO queue size 128 less than required 00:10:16.910 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:16.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:16.910 Initialization complete. Launching workers. 
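The subsystem wiring above (cnode0 with serial SPDK0, Delay0 attached as namespace 1, data and discovery listeners on 10.0.0.2:4420) is then driven by the abort example: queue depth 128 for one second against a bdev with second-long latency, so nearly every submitted IO is still in flight when an abort is issued for it. The standalone equivalent with the arguments the trace records, continuing the sketch above:

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0    # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128    # core 0, 1 second, queue depth 128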
00:10:16.910 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37705 00:10:16.910 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37770, failed to submit 62 00:10:16.910 success 37709, unsuccessful 61, failed 0 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.910 rmmod nvme_tcp 00:10:16.910 rmmod nvme_fabrics 00:10:16.910 rmmod nvme_keyring 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 371461 ']' 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 371461 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 371461 ']' 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 371461 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 371461 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:16.910 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 371461' 00:10:16.911 killing process with pid 371461 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 371461 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 371461 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.911 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.909 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.909 00:10:18.909 real 0m11.270s 00:10:18.909 user 0m11.839s 00:10:18.909 sys 0m5.408s 00:10:18.909 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.909 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:18.909 ************************************ 00:10:18.909 END TEST nvmf_abort 00:10:18.909 ************************************ 00:10:18.909 06:20:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:18.909 06:20:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:18.909 06:20:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.909 06:20:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.909 ************************************ 00:10:18.909 START TEST nvmf_ns_hotplug_stress 00:10:18.909 ************************************ 00:10:18.909 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:19.168 * Looking for test storage... 
00:10:19.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.168 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:19.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.169 --rc genhtml_branch_coverage=1 00:10:19.169 --rc genhtml_function_coverage=1 00:10:19.169 --rc genhtml_legend=1 00:10:19.169 --rc geninfo_all_blocks=1 00:10:19.169 --rc geninfo_unexecuted_blocks=1 00:10:19.169 00:10:19.169 ' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:19.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.169 --rc genhtml_branch_coverage=1 00:10:19.169 --rc genhtml_function_coverage=1 00:10:19.169 --rc genhtml_legend=1 00:10:19.169 --rc geninfo_all_blocks=1 00:10:19.169 --rc geninfo_unexecuted_blocks=1 00:10:19.169 00:10:19.169 ' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:19.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.169 --rc genhtml_branch_coverage=1 00:10:19.169 --rc genhtml_function_coverage=1 00:10:19.169 --rc genhtml_legend=1 00:10:19.169 --rc geninfo_all_blocks=1 00:10:19.169 --rc geninfo_unexecuted_blocks=1 00:10:19.169 00:10:19.169 ' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:19.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.169 --rc genhtml_branch_coverage=1 00:10:19.169 --rc genhtml_function_coverage=1 00:10:19.169 --rc genhtml_legend=1 00:10:19.169 --rc geninfo_all_blocks=1 00:10:19.169 --rc geninfo_unexecuted_blocks=1 00:10:19.169 00:10:19.169 ' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
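The near-identical PATH lines above are paths/export.sh prepending the Go, protoc and golangci directories on every source without checking for duplicates, so each re-source grows PATH by another copy of the same triple. Lookup still works (first match wins), but a dedupe pass like the following would keep the exported value readable; dedupe_path is an illustrative helper, not something paths/export.sh defines today:

  # Collapse repeated PATH entries while preserving first-seen order.
  dedupe_path() {
      local IFS=: entry out=
      for entry in $PATH; do
          case ":$out:" in
              *":$entry:"*) ;;                  # duplicate, skip
              *) out=${out:+$out:}$entry ;;
          esac
      done
      export PATH=$out
  }
  dedupe_path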
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.169 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.170 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
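The one genuine failure in this stretch is nvmf/common.sh line 33, whose test expands to '[' '' -eq 1 ']': a numeric comparison against an unset (empty) variable, which bash rejects with the "integer expression expected" message logged above; the condition then evaluates false and the script carries on. A defensive pattern for that kind of flag test, with the variable name left hypothetical since the log only shows the empty expansion:

  # '[' '' -eq 1 ']' fails at parse time; default the expansion first.
  some_flag=${SOME_TEST_FLAG:-0}     # SOME_TEST_FLAG is a stand-in name
  if [[ $some_flag -eq 1 ]]; then
      echo "flag enabled"
  fi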
local -ga e810 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:25.741 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.741 
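The declarations above build per-NIC-family buckets (e810, x722, mlx) out of pci_bus_cache, the map of "vendor:device" keys to PCI addresses that gather_supported_nvmf_pci_devs fills from a bus scan. With SPDK_TEST_NVMF_NICS=e810, pci_devs is narrowed to the e810 bucket, which is why exactly the two 0x8086:0x159b ports at 0000:86:00.0/1 are reported as found. A condensed sketch with the cache contents mocked from this run:

  # pci_bus_cache mocked from this run; the real map comes from a sysfs scan.
  declare -A pci_bus_cache=(["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1")
  intel=0x8086 mellanox=0x15b3
  e810=() x722=() mlx=()
  e810+=(${pci_bus_cache["$intel:0x1592"]})    # absent key appends nothing
  e810+=(${pci_bus_cache["$intel:0x159b"]})    # both E810 ports land here
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
  pci_devs=("${e810[@]}")                      # NICS=e810 narrows the list
  echo "candidate NICs: ${pci_devs[*]}"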
06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:25.741 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:25.741 Found net devices under 0000:86:00.0: cvl_0_0 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:25.741 Found net devices under 0000:86:00.1: cvl_0_1 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.741 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:25.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:10:25.742 00:10:25.742 --- 10.0.0.2 ping statistics --- 00:10:25.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.742 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:10:25.742 00:10:25.742 --- 10.0.0.1 ping statistics --- 00:10:25.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.742 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=375498 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 375498 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
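nvmf_tcp_init above turns the two E810 ports into a self-contained initiator/target pair on one box: cvl_0_0 (target side) moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 (initiator side) keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP/4420 on the initiator interface, and the two pings prove the path in both directions before any NVMe-oF traffic is attempted. The same sequence replayed without the xtrace noise (commands copied from the trace; the -m comment bookkeeping that the ipts wrapper adds is omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns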
375498 ']' 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:25.742 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.742 [2024-11-20 06:20:56.986500] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:10:25.742 [2024-11-20 06:20:56.986540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.742 [2024-11-20 06:20:57.066065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:25.742 [2024-11-20 06:20:57.106199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.742 [2024-11-20 06:20:57.106237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.742 [2024-11-20 06:20:57.106245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.742 [2024-11-20 06:20:57.106251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.742 [2024-11-20 06:20:57.106257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
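nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 375498) and then blocks in waitforlisten until the RPC socket answers, which is the "Waiting for process to start up and listen on UNIX domain socket" message above. A minimal sketch of that wait pattern, assuming a poll of rpc_get_methods rather than SPDK's exact retry logic:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Succeed once the app answers on its RPC socket; fail if it dies or times out.
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1              # app exited early
          "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }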
00:10:25.742 [2024-11-20 06:20:57.107617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.742 [2024-11-20 06:20:57.107724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.742 [2024-11-20 06:20:57.107725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.001 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:26.001 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:10:26.001 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.001 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:26.001 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.260 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.260 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:26.260 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:26.260 [2024-11-20 06:20:58.005882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.260 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:26.518 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.777 [2024-11-20 06:20:58.395298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.777 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:27.036 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:27.036 Malloc0 00:10:27.036 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:27.295 Delay0 00:10:27.295 06:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.556 06:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:27.818 NULL1 00:10:27.818 06:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:27.818 06:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:27.818 06:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=375986 00:10:27.818 06:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:27.818 06:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.196 Read completed with error (sct=0, sc=11) 00:10:29.196 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.455 06:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:29.455 06:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:29.455 true 00:10:29.455 06:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:29.455 06:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.390 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.650 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:30.650 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:30.650 true 00:10:30.650 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:30.650 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.908 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.167 
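At this point everything the test needs has been provisioned over RPC: a TCP transport, subsystem cnode1 (allow-any-host, serial SPDK00000000000001, at most 10 namespaces), a data listener on 10.0.0.2:4420 plus a discovery listener, a 32 MiB/512 B Malloc0 wrapped in Delay0 (1,000,000 us average and p99 on both reads and writes), and a 1000 MiB/512 B NULL1; both bdevs are attached as namespaces, perf starts a 30-second randread run, and the hotplug loop begins. Condensed replay with the flags copied from the trace and the loop body paraphrased from ns_hotplug_stress.sh:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 1000 512
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do       # keep hotplugging while perf runs
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      $rpc_py bdev_null_resize NULL1 $((++null_size))
  done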
06:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:31.167 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:31.425 true 00:10:31.425 06:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:31.425 06:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.361 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.620 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:32.620 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:32.879 true 00:10:32.879 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:32.879 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.815 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.815 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:33.815 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:34.073 true 00:10:34.073 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:34.073 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.331 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.590 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:34.590 06:21:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:34.590 true 00:10:34.590 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:34.590 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.967 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.967 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:35.967 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:36.226 true 00:10:36.226 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:36.226 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.164 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.164 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:37.164 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:37.422 true 00:10:37.422 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:37.422 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.681 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.939 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:37.939 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:10:37.939 true 00:10:37.939 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:37.939 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.198 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.457 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:38.457 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:38.716 true 00:10:38.716 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:38.716 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.653 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.653 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:39.653 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:39.911 true 00:10:39.911 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:39.911 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.169 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.169 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:40.169 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:40.428 true 00:10:40.428 06:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:40.428 
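Each pass prints true from bdev_null_resize, and perf keeps reporting "Read completed with error (sct=0, sc=11)": status code type 0 (generic) with status 0x0b, Invalid Namespace or Format, which is the expected completion for reads in flight while namespace 1 is detached; the -Q 1000 option appears to rate-limit those prints, hence the "Message suppressed 999 times" lines. The loop never checks the resize beyond the boolean, but one could verify it landed with an extra query (an illustrative check, not part of the test):

  $rpc_py bdev_null_resize NULL1 1010
  $rpc_py bdev_get_bdevs -b NULL1 \
      | python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["num_blocks"])'
  # expect 1010 MiB / 512 B blocks = 1010 * 1024 * 1024 / 512 = 2068480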
06:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.805 06:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.805 06:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:41.805 06:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:42.064 true 00:10:42.064 06:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:42.064 06:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.001 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.001 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:43.001 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:43.260 true 00:10:43.260 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:43.260 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.519 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.777 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:43.777 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:43.777 true 00:10:43.777 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:43.777 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.155 06:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.155 06:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:45.155 06:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:45.413 true 00:10:45.413 06:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:45.413 06:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.349 06:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.349 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:46.349 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:46.608 true 00:10:46.608 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:46.608 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.867 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.126 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:47.126 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:47.126 true 00:10:47.126 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:47.126 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.544 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:10:48.544 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.544 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:48.544 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:48.825 true 00:10:48.825 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:48.826 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.761 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.761 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:49.761 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:50.021 true 00:10:50.021 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:50.021 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.280 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.280 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:50.280 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:50.539 true 00:10:50.539 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:50.539 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.916 06:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.916 06:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:51.916 06:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:52.175 true 00:10:52.175 06:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:52.175 06:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.112 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.112 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:53.112 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:53.371 true 00:10:53.371 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:53.371 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.630 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.889 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:53.889 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:53.889 true 00:10:53.889 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:53.889 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.268 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.268 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:10:55.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.268 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:55.268 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:55.527 true 00:10:55.527 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:55.527 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.469 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.469 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:56.469 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:56.726 true 00:10:56.726 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:56.726 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.726 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.984 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:56.984 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:57.240 true 00:10:57.240 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:57.240 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.610 Initializing NVMe Controllers 00:10:58.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:58.610 Controller IO queue size 128, less than required. 00:10:58.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:58.610 Controller IO queue size 128, less than required. 00:10:58.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
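The doubled "Controller IO queue size 128, less than required" warning (once per namespace) follows from this run's arithmetic: perf submits -q 128 per namespace and both namespaces share the single qpair on core 0, so up to 2 x 128 = 256 commands can be outstanding against a 128-entry queue; the surplus simply waits in the driver, which is all the "Consider using lower queue depth or smaller IO size" hint means. In this test it is expected noise, not a failure.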
00:10:58.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:58.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:58.610 Initialization complete. Launching workers. 00:10:58.610 ======================================================== 00:10:58.610 Latency(us) 00:10:58.610 Device Information : IOPS MiB/s Average min max 00:10:58.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2138.83 1.04 41589.26 2244.89 1012665.48 00:10:58.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17907.80 8.74 7147.47 1716.17 371287.82 00:10:58.610 ======================================================== 00:10:58.610 Total : 20046.63 9.79 10822.16 1716.17 1012665.48 00:10:58.610 00:10:58.610 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.610 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:58.610 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:58.610 true 00:10:58.610 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 375986 00:10:58.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (375986) - No such process 00:10:58.610 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 375986 00:10:58.610 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.867 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.125 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:59.125 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:59.125 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:59.125 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:59.125 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:59.383 null0 00:10:59.383 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:59.383 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:59.383 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:59.383 null1 00:10:59.383 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:10:59.383 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:59.383 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:59.642 null2 00:10:59.642 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:59.642 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:59.642 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:59.900 null3 00:10:59.900 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:59.900 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:59.900 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:00.159 null4 00:11:00.159 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:00.159 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:00.159 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:00.159 null5 00:11:00.159 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:00.159 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:00.159 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:00.417 null6 00:11:00.417 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:00.417 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:00.417 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:00.676 null7 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:00.676 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
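Each worker then emits the sh@14-sh@18 trace lines that fill the rest of this phase: it captures its namespace ID and bdev name, and hot-adds then hot-removes that one namespace ten times against nqn.2016-06.io.spdk:cnode1. A hedged reconstruction of add_remove from those markers (argument handling and loop bound read off the sh@14 and sh@16 lines; rpc_py is the shorthand assumed in the previous sketch):

  add_remove() {
          # sh@14: local nsid=<n> bdev=null<n-1>
          local nsid=$1 bdev=$2
          # sh@16: (( i = 0 )); (( i < 10 )); (( ++i ))
          for ((i = 0; i < 10; i++)); do
                  # sh@17: attach the null bdev as namespace $nsid
                  "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
                  # sh@18: detach it again
                  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
          done
  }

With eight such workers running concurrently against the same subsystem, the trace below is simply their output interleaved; the ordering of add/remove pairs across namespace IDs is nondeterministic from run to run.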
00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 382102 382103 382105 382107 382109 382111 382113 382115 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.677 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:00.937 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.937 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:00.937 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.937 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.937 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.937 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.937 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.937 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:01.196 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.197 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:01.197 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.197 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.197 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:01.455 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:01.455 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:01.455 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.456 06:21:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.456 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:01.715 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.715 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:01.715 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:01.715 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:01.715 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.715 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.715 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.715 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.974 06:21:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.974 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:02.233 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:02.233 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:02.233 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:02.233 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:02.233 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:02.233 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:02.233 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:02.233 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.233 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.233 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.233 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:02.233 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.233 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.233 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.492 06:21:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:02.492 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.751 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:03.009 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.010 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:03.010 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:03.010 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:03.010 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:03.010 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:03.010 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:03.010 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:03.268 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.269 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.528 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:03.787 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:03.787 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:11:03.787 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:03.787 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:03.787 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:03.787 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:03.787 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:03.787 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.047 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:04.306 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:04.306 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:04.306 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:04.306 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:04.307 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.307 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:04.307 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:04.307 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.307 06:21:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.307 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.307 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:04.564 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.564 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:04.565 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.823 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.824 
06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.824 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.824 rmmod nvme_tcp 00:11:04.824 rmmod nvme_fabrics 00:11:05.083 rmmod nvme_keyring 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 375498 ']' 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 375498 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 375498 ']' 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 375498 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 375498 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 375498' 00:11:05.083 killing process with pid 375498 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 375498 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 375498 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.083 06:21:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.083 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.619 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.619 00:11:07.619 real 0m48.293s 00:11:07.619 user 3m15.983s 00:11:07.619 sys 0m15.809s 00:11:07.619 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.619 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.619 ************************************ 00:11:07.619 END TEST nvmf_ns_hotplug_stress 00:11:07.619 ************************************ 00:11:07.619 06:21:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:07.619 06:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:07.619 06:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.619 06:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.619 ************************************ 00:11:07.619 START TEST nvmf_delete_subsystem 00:11:07.619 ************************************ 00:11:07.619 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:07.619 * Looking for test storage... 
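The ns_hotplug_stress xtrace above (target/ns_hotplug_stress.sh@16-18) boils down to one small RPC loop: repeated passes that attach the null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 and then detach them again, in randomized order. A minimal bash sketch of that pattern follows; it is a reconstruction from the trace, not the script itself (the shuffled order via shuf, the '|| true' failure tolerance, and the exact loop shape are assumptions):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do                  # the sh@16 '(( ++i )) / (( i < 10 ))' pairs
      for n in $(shuf -e {1..8}); do                # assumed: the randomized nsid order seen in the trace
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true   # sh@17: nsid n backed by bdev null(n-1)
      done
      for n in $(shuf -e {1..8}); do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" || true                    # sh@18: detach them again
      done
  done

In the actual run the add and remove batches overlap (their xtrace lines interleave above), which is what makes this a hotplug stress rather than an orderly add/remove cycle.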
00:11:07.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.619 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:07.619 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:07.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.620 --rc genhtml_branch_coverage=1 00:11:07.620 --rc genhtml_function_coverage=1 00:11:07.620 --rc genhtml_legend=1 00:11:07.620 --rc geninfo_all_blocks=1 00:11:07.620 --rc geninfo_unexecuted_blocks=1 00:11:07.620 00:11:07.620 ' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:07.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.620 --rc genhtml_branch_coverage=1 00:11:07.620 --rc genhtml_function_coverage=1 00:11:07.620 --rc genhtml_legend=1 00:11:07.620 --rc geninfo_all_blocks=1 00:11:07.620 --rc geninfo_unexecuted_blocks=1 00:11:07.620 00:11:07.620 ' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:07.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.620 --rc genhtml_branch_coverage=1 00:11:07.620 --rc genhtml_function_coverage=1 00:11:07.620 --rc genhtml_legend=1 00:11:07.620 --rc geninfo_all_blocks=1 00:11:07.620 --rc geninfo_unexecuted_blocks=1 00:11:07.620 00:11:07.620 ' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:07.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.620 --rc genhtml_branch_coverage=1 00:11:07.620 --rc genhtml_function_coverage=1 00:11:07.620 --rc genhtml_legend=1 00:11:07.620 --rc geninfo_all_blocks=1 00:11:07.620 --rc geninfo_unexecuted_blocks=1 00:11:07.620 00:11:07.620 ' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.620 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:14.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.191 
06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:14.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:14.191 Found net devices under 0000:86:00.0: cvl_0_0 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:14.191 Found net devices under 0000:86:00.1: cvl_0_1 
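To get from the whitelisted PCI addresses to the kernel interface names used below, the harness relies on the sysfs idiom traced at nvmf/common.sh@411 and @427-428: each PCI function exposes its bound net devices under /sys/bus/pci/devices/<addr>/net/. A standalone restatement (illustrative only; the real code iterates the discovered pci_devs array rather than hard-coded addresses):

  for pci in 0000:86:00.0 0000:86:00.1; do               # the two e810 ports (0x8086 - 0x159b) found above
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # common.sh@411: sysfs maps a PCI function to its netdevs
      pci_net_devs=("${pci_net_devs[@]##*/}")            # common.sh@427: strip the path, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

On this machine the glob resolves to cvl_0_0 and cvl_0_1, the two interfaces the TCP test then splits across a network namespace.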
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:14.191 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:14.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:14.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms
00:11:14.191 
00:11:14.191 --- 10.0.0.2 ping statistics ---
00:11:14.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:14.191 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:14.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:14.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms
00:11:14.191 
00:11:14.191 --- 10.0.0.1 ping statistics ---
00:11:14.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:14.191 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:14.191 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=386503
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 386503
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 386503 ']'
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100
00:11:14.192 06:21:45
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:14.192 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.192 [2024-11-20 06:21:45.335355] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:11:14.192 [2024-11-20 06:21:45.335397] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.192 [2024-11-20 06:21:45.412872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:14.192 [2024-11-20 06:21:45.453777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.192 [2024-11-20 06:21:45.453821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.192 [2024-11-20 06:21:45.453828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.192 [2024-11-20 06:21:45.453835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.192 [2024-11-20 06:21:45.453840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.192 [2024-11-20 06:21:45.454996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.192 [2024-11-20 06:21:45.454998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.450 [2024-11-20 06:21:46.196510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.450 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:14.451 06:21:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.451 [2024-11-20 06:21:46.216665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.451 NULL1 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.451 Delay0 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=386747 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:14.451 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:14.709 [2024-11-20 06:21:46.328416] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
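Disentangled from the interleaved xtrace above, the whole delete_subsystem fixture is a short rpc_cmd sequence (rpc_cmd being the suite's wrapper around scripts/rpc.py and /var/tmp/spdk.sock); every value below is taken from the delete_subsystem.sh@15-30 lines, and only the backgrounding of perf is reconstructed:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                            # sh@15
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # sh@16: allow up to 10 namespaces
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # sh@17
  rpc_cmd bdev_null_create NULL1 1000 512                                                    # sh@18: 1000 MB null bdev, 512-byte blocks
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # sh@23: delay-bdev latency knobs (us)
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0                            # sh@24
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                 # sh@26; runs as pid 386747 above
  perf_pid=$!                                                   # assumed: how sh@28 captures the pid
  sleep 2                                                       # sh@30: let queue-depth-128 randrw I/O pile up on Delay0
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # sh@32: delete while that I/O is in flight

The Delay0 bdev and the two-second sleep exist to guarantee a backlog of outstanding commands when the subsystem disappears; the flood of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records that follows is the initiator draining those commands, which is exactly the behavior under test.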
00:11:16.615 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:16.615 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.615 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:16.615 [... long runs of 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions, interleaved with 'starting I/O failed: -6' submission failures, elided here and between the qpair errors below ...]
00:11:16.615 [2024-11-20 06:21:48.367183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16932c0 is same with the state(6) to be set
00:11:16.616 [2024-11-20 06:21:48.367635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16934a0 is same with the state(6) to be set
00:11:16.616 [2024-11-20 06:21:48.368312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7facd800d4b0 is same with the state(6) to be set
00:11:17.551 [2024-11-20 06:21:49.340645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16949a0 is same with the state(6) to be set
00:11:17.551 [2024-11-20 06:21:49.369898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7facd800d020 is same with the state(6) to be set
00:11:17.551 [2024-11-20 06:21:49.370329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1693680 is same with the state(6) to be set
00:11:17.551 [2024-11-20 06:21:49.370788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7facd8000c40 is same with the state(6) to be set
00:11:17.551 [2024-11-20 06:21:49.371381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7facd800d7e0 is same with the state(6) to be set
00:11:17.551 Initializing NVMe Controllers
00:11:17.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:17.551 Controller IO queue size 128, less than required.
00:11:17.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
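A note on the error values above: (sct=0, sc=8) is NVMe status code type 0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion" in the NVMe base specification -- exactly what in-flight I/O should report while nvmf_delete_subsystem tears down cnode1 under load -- and the "starting I/O failed: -6" entries are fresh submissions failing with what is most likely -ENXIO once the qpair is gone. When reading a saved copy of a log like this one, the two cases can be tallied quickly with grep (the log filename below is hypothetical):

    # illustrative only: count aborted completions vs. failed submissions
    grep -c 'completed with error (sct=0, sc=8)' nvmf-tcp-phy-autotest.log
    grep -c 'starting I/O failed: -6' nvmf-tcp-phy-autotest.log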
00:11:17.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:17.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:17.551 Initialization complete. Launching workers.
00:11:17.551 ========================================================
00:11:17.551                                                                 Latency(us)
00:11:17.551 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:17.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     150.58       0.07  895770.73     267.07 1008895.79
00:11:17.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     163.00       0.08 1068273.12     818.42 2001544.91
00:11:17.551 ========================================================
00:11:17.551 Total                                                                    :     313.58       0.15  985439.17     267.07 2001544.91
00:11:17.551
00:11:17.551 [2024-11-20 06:21:49.371950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16949a0 (9): Bad file descriptor
00:11:17.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:17.551 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.551 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:17.551 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 386747
00:11:17.552 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 386747
00:11:18.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (386747) - No such process
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 386747
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 386747
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 386747
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- #
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.120 [2024-11-20 06:21:49.901314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=387232 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 387232 00:11:18.120 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:18.378 [2024-11-20 06:21:49.991706] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
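The xtrace above shows the shape of the second round of this test: cnode1 is recreated, the TCP listener and the Delay0 namespace are re-added, spdk_nvme_perf is started in the background (a 3-second 70/30 random read/write run at queue depth 128 with 512-byte I/O; -c 0xC pins it to cores 2 and 3, matching the "from core 2"/"from core 3" rows in the latency tables), and the script then polls the perf process while the subsystem is deleted underneath it. A rough reconstruction of that launch-and-poll sequence, pieced together from the trace rather than copied from delete_subsystem.sh itself:

    # reconstructed from the xtrace; loop bound and ordering are approximate
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!           # 387232 in this run
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
        (( delay++ > 20 )) && break             # give up after ~10 s
        sleep 0.5
    done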
00:11:18.636 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:18.636 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 387232 00:11:18.636 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:19.203 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:19.203 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 387232 00:11:19.203 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:19.771 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:19.771 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 387232 00:11:19.771 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.338 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.338 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 387232 00:11:20.338 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.904 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.904 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 387232 00:11:20.904 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:21.163 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:21.163 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 387232 00:11:21.163 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:21.421 Initializing NVMe Controllers 00:11:21.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.421 Controller IO queue size 128, less than required. 00:11:21.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:21.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:21.422 Initialization complete. Launching workers. 
00:11:21.422 ========================================================
00:11:21.422                                                                 Latency(us)
00:11:21.422 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:21.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002003.01 1000133.60 1006244.37
00:11:21.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004096.63 1000181.27 1041693.25
00:11:21.422 ========================================================
00:11:21.422 Total                                                                    :     256.00       0.12 1003049.82 1000133.60 1041693.25
00:11:21.422
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 387232
00:11:21.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (387232) - No such process
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 387232
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:21.680 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:21.680 rmmod nvme_tcp
00:11:21.680 rmmod nvme_fabrics
00:11:21.680 rmmod nvme_keyring
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 386503 ']'
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 386503
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 386503 ']'
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 386503
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 386503
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo
']' 00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 386503' 00:11:21.939 killing process with pid 386503 00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 386503 00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 386503 00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:11:21.939 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:11:21.940 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.940 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.940 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.940 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.940 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.940 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.940 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.478 00:11:24.478 real 0m16.759s 00:11:24.478 user 0m30.402s 00:11:24.478 sys 0m5.467s 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.478 ************************************ 00:11:24.478 END TEST nvmf_delete_subsystem 00:11:24.478 ************************************ 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:24.478 ************************************ 00:11:24.478 START TEST nvmf_host_management 00:11:24.478 ************************************ 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:24.478 * Looking for test storage... 
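Each of these suites runs under the harness's run_test wrapper, which is what produces the asterisk banners and the bash time summary above (nvmf_delete_subsystem finished in 0m16.759s wall time). A simplified sketch of that wrapper; the real run_test in common/autotest_common.sh additionally manages xtrace state and failure reporting:

    # simplified sketch, not the full autotest_common.sh implementation
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # e.g. host_management.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }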
00:11:24.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:11:24.478 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.478 --rc genhtml_branch_coverage=1 00:11:24.478 --rc genhtml_function_coverage=1 00:11:24.478 --rc genhtml_legend=1 00:11:24.478 --rc geninfo_all_blocks=1 00:11:24.478 --rc geninfo_unexecuted_blocks=1 00:11:24.478 00:11:24.478 ' 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.478 --rc genhtml_branch_coverage=1 00:11:24.478 --rc genhtml_function_coverage=1 00:11:24.478 --rc genhtml_legend=1 00:11:24.478 --rc geninfo_all_blocks=1 00:11:24.478 --rc geninfo_unexecuted_blocks=1 00:11:24.478 00:11:24.478 ' 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.478 --rc genhtml_branch_coverage=1 00:11:24.478 --rc genhtml_function_coverage=1 00:11:24.478 --rc genhtml_legend=1 00:11:24.478 --rc geninfo_all_blocks=1 00:11:24.478 --rc geninfo_unexecuted_blocks=1 00:11:24.478 00:11:24.478 ' 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.478 --rc genhtml_branch_coverage=1 00:11:24.478 --rc genhtml_function_coverage=1 00:11:24.478 --rc genhtml_legend=1 00:11:24.478 --rc geninfo_all_blocks=1 00:11:24.478 --rc geninfo_unexecuted_blocks=1 00:11:24.478 00:11:24.478 ' 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.478 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:24.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.479 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:31.138 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:31.138 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:31.138 Found net devices under 0000:86:00.0: cvl_0_0 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.138 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.139 06:22:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:31.139 Found net devices under 0000:86:00.1: cvl_0_1 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.139 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:11:31.139 00:11:31.139 --- 10.0.0.2 ping statistics --- 00:11:31.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.139 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:11:31.139 00:11:31.139 --- 10.0.0.1 ping statistics --- 00:11:31.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.139 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=391460 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 391460 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:31.139 06:22:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 391460 ']' 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:31.139 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.139 [2024-11-20 06:22:02.147079] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:11:31.139 [2024-11-20 06:22:02.147130] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.139 [2024-11-20 06:22:02.223359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.139 [2024-11-20 06:22:02.264012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.139 [2024-11-20 06:22:02.264049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.139 [2024-11-20 06:22:02.264056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.139 [2024-11-20 06:22:02.264062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.139 [2024-11-20 06:22:02.264066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
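Condensed, the namespace plumbing and target launch traced above reduce to the short sequence below — a minimal sketch using the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, and the nvmf_tgt flags from this run (paths are shortened, the comment tag that the ipts wrapper appends to the iptables rule is dropped, and the waitforlisten handshake is replaced by plain backgrounding):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1          # start from a clean slate
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                               # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0       # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listen port
ping -c 1 10.0.0.2                                            # initiator -> target reachability
ip netns exec "$NS" ping -c 1 10.0.0.1                        # target -> initiator reachability
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &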
00:11:31.139 [2024-11-20 06:22:02.265704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.139 [2024-11-20 06:22:02.265821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.139 [2024-11-20 06:22:02.265930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.139 [2024-11-20 06:22:02.265931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:31.398 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:31.398 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:11:31.398 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:31.398 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:31.398 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.398 [2024-11-20 06:22:03.032375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.398 Malloc0 00:11:31.398 [2024-11-20 06:22:03.111413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=391729 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 391729 /var/tmp/bdevperf.sock 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 391729 ']' 00:11:31.398 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:31.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:31.399 { 00:11:31.399 "params": { 00:11:31.399 "name": "Nvme$subsystem", 00:11:31.399 "trtype": "$TEST_TRANSPORT", 00:11:31.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.399 "adrfam": "ipv4", 00:11:31.399 "trsvcid": "$NVMF_PORT", 00:11:31.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.399 "hdgst": ${hdgst:-false}, 00:11:31.399 "ddgst": ${ddgst:-false} 00:11:31.399 }, 00:11:31.399 "method": "bdev_nvme_attach_controller" 00:11:31.399 } 00:11:31.399 EOF 00:11:31.399 )") 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:31.399 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:31.399 "params": { 00:11:31.399 "name": "Nvme0", 00:11:31.399 "trtype": "tcp", 00:11:31.399 "traddr": "10.0.0.2", 00:11:31.399 "adrfam": "ipv4", 00:11:31.399 "trsvcid": "4420", 00:11:31.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:31.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:31.399 "hdgst": false, 00:11:31.399 "ddgst": false 00:11:31.399 }, 00:11:31.399 "method": "bdev_nvme_attach_controller" 00:11:31.399 }' 00:11:31.399 [2024-11-20 06:22:03.206431] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:11:31.399 [2024-11-20 06:22:03.206475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391729 ] 00:11:31.658 [2024-11-20 06:22:03.284240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.658 [2024-11-20 06:22:03.325377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.658 Running I/O for 10 seconds... 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1166 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1166 -ge 100 ']' 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:32.597 
06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.597 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 [2024-11-20 06:22:04.130783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.597 [2024-11-20 06:22:04.130850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.597 [2024-11-20 06:22:04.130858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.130936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eae80 is same with the state(6) to be set 00:11:32.598 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.598 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:32.598 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.598 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:32.598 [2024-11-20 06:22:04.138315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.598 [2024-11-20 06:22:04.138348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.138358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.598 [2024-11-20 06:22:04.138366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.138374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.598 [2024-11-20 06:22:04.138389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.138397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.598 [2024-11-20 06:22:04.138403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.138410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1359500 is same with the state(6) to be set 00:11:32.598 [2024-11-20 06:22:04.139124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.598 [2024-11-20 06:22:04.139531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.598 [2024-11-20 06:22:04.139537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.139992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.139999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.140006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.140013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.140020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.140028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.140036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.140042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.140050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.140056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.140064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.599 [2024-11-20 06:22:04.140070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.599 [2024-11-20 06:22:04.141000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:32.599 task offset: 32768 on job bdev=Nvme0n1 fails 00:11:32.599 00:11:32.599 Latency(us) 00:11:32.599 [2024-11-20T05:22:04.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.599 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, 
IO size: 65536) 00:11:32.599 Job: Nvme0n1 ended in about 0.65 seconds with error 00:11:32.599 Verification LBA range: start 0x0 length 0x400 00:11:32.600 Nvme0n1 : 0.65 1964.29 122.77 98.21 0.00 30433.00 1341.93 26963.38 00:11:32.600 [2024-11-20T05:22:04.436Z] =================================================================================================================== 00:11:32.600 [2024-11-20T05:22:04.436Z] Total : 1964.29 122.77 98.21 0.00 30433.00 1341.93 26963.38 00:11:32.600 [2024-11-20 06:22:04.143389] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:32.600 [2024-11-20 06:22:04.143411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359500 (9): Bad file descriptor 00:11:32.600 [2024-11-20 06:22:04.146303] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:11:32.600 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.600 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:33.536 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 391729 00:11:33.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (391729) - No such process 00:11:33.536 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:33.536 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:33.536 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:33.536 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:33.536 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:33.536 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:33.536 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:33.537 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:33.537 { 00:11:33.537 "params": { 00:11:33.537 "name": "Nvme$subsystem", 00:11:33.537 "trtype": "$TEST_TRANSPORT", 00:11:33.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:33.537 "adrfam": "ipv4", 00:11:33.537 "trsvcid": "$NVMF_PORT", 00:11:33.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:33.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:33.537 "hdgst": ${hdgst:-false}, 00:11:33.537 "ddgst": ${ddgst:-false} 00:11:33.537 }, 00:11:33.537 "method": "bdev_nvme_attach_controller" 00:11:33.537 } 00:11:33.537 EOF 00:11:33.537 )") 00:11:33.537 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:33.537 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
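The @84/@85 RPCs in the first run above are the heart of the test: host access is revoked and then restored while bdevperf still holds the connection. Issued by hand with scripts/rpc.py rather than the rpc_cmd wrapper (an assumption for illustration; the socket paths are the ones from this run, with the target listening on the default /var/tmp/spdk.sock), the same exchange looks like:

RPC=./scripts/rpc.py
# read-op count from the running bdevperf instance, as waitforio polls at @55
$RPC -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'
# revoke the host's access mid-I/O: queued WRITEs complete with
# ABORTED - SQ DELETION and the initiator starts a controller reset
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore access; the pending reset then reconnects successfully
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0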
00:11:33.537 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:33.537 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:33.537 "params": { 00:11:33.537 "name": "Nvme0", 00:11:33.537 "trtype": "tcp", 00:11:33.537 "traddr": "10.0.0.2", 00:11:33.537 "adrfam": "ipv4", 00:11:33.537 "trsvcid": "4420", 00:11:33.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:33.537 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:33.537 "hdgst": false, 00:11:33.537 "ddgst": false 00:11:33.537 }, 00:11:33.537 "method": "bdev_nvme_attach_controller" 00:11:33.537 }' 00:11:33.537 [2024-11-20 06:22:05.200785] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:11:33.537 [2024-11-20 06:22:05.200834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391982 ] 00:11:33.537 [2024-11-20 06:22:05.276383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.537 [2024-11-20 06:22:05.315249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.796 Running I/O for 1 seconds... 00:11:34.733 1984.00 IOPS, 124.00 MiB/s 00:11:34.734 Latency(us) 00:11:34.734 [2024-11-20T05:22:06.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.734 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:34.734 Verification LBA range: start 0x0 length 0x400 00:11:34.734 Nvme0n1 : 1.01 2026.58 126.66 0.00 0.00 31091.31 5055.63 26588.89 00:11:34.734 [2024-11-20T05:22:06.570Z] =================================================================================================================== 00:11:34.734 [2024-11-20T05:22:06.570Z] Total : 2026.58 126.66 0.00 0.00 31091.31 5055.63 26588.89 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.993 rmmod nvme_tcp 00:11:34.993 rmmod nvme_fabrics 00:11:34.993 rmmod nvme_keyring 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 391460 ']' 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 391460 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 391460 ']' 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 391460 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 391460 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 391460' 00:11:34.993 killing process with pid 391460 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 391460 00:11:34.993 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 391460 00:11:35.253 [2024-11-20 06:22:06.946428] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.253 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.792 06:22:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:37.792 00:11:37.792 real 0m13.175s 00:11:37.792 user 0m23.010s 00:11:37.792 sys 0m5.692s 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.792 ************************************ 00:11:37.792 END TEST nvmf_host_management 00:11:37.792 ************************************ 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:37.792 ************************************ 00:11:37.792 START TEST nvmf_lvol 00:11:37.792 ************************************ 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:37.792 * Looking for test storage... 00:11:37.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.792 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:37.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.793 --rc genhtml_branch_coverage=1 00:11:37.793 --rc genhtml_function_coverage=1 00:11:37.793 --rc genhtml_legend=1 00:11:37.793 --rc geninfo_all_blocks=1 00:11:37.793 --rc geninfo_unexecuted_blocks=1 00:11:37.793 00:11:37.793 ' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:37.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.793 --rc genhtml_branch_coverage=1 00:11:37.793 --rc genhtml_function_coverage=1 00:11:37.793 --rc genhtml_legend=1 00:11:37.793 --rc geninfo_all_blocks=1 00:11:37.793 --rc geninfo_unexecuted_blocks=1 00:11:37.793 00:11:37.793 ' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:37.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.793 --rc genhtml_branch_coverage=1 00:11:37.793 --rc genhtml_function_coverage=1 00:11:37.793 --rc genhtml_legend=1 00:11:37.793 --rc geninfo_all_blocks=1 00:11:37.793 --rc geninfo_unexecuted_blocks=1 00:11:37.793 00:11:37.793 ' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:37.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.793 --rc genhtml_branch_coverage=1 00:11:37.793 --rc genhtml_function_coverage=1 00:11:37.793 --rc genhtml_legend=1 00:11:37.793 --rc geninfo_all_blocks=1 00:11:37.793 --rc geninfo_unexecuted_blocks=1 00:11:37.793 00:11:37.793 ' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
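The lcov probe at the top of the nvmf_lvol trace (autotest_common.sh@1691 feeding cmp_versions in scripts/common.sh) is a plain component-wise version compare. Condensed into a standalone sketch — a rough re-creation, not the literal scripts/common.sh code — it behaves like:

# Is version $1 older than version $2? Components split on '.', '-' and ':',
# exactly as the traced IFS=.-: does; missing components compare as 0.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < max; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "old lcov: keep the --rc lcov_*_coverage=1 flags"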
00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:37.793 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:11:37.794 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:44.367 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:44.367 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.367 06:22:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:44.367 Found net devices under 0000:86:00.0: cvl_0_0 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:44.367 Found net devices under 0000:86:00.1: cvl_0_1 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.367 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:11:44.368 00:11:44.368 --- 10.0.0.2 ping statistics --- 00:11:44.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.368 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:11:44.368 00:11:44.368 --- 10.0.0.1 ping statistics --- 00:11:44.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.368 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=395870 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 395870 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 395870 ']' 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:44.368 [2024-11-20 06:22:15.434420] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:11:44.368 [2024-11-20 06:22:15.434460] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.368 [2024-11-20 06:22:15.512488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:44.368 [2024-11-20 06:22:15.553919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.368 [2024-11-20 06:22:15.553955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.368 [2024-11-20 06:22:15.553962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.368 [2024-11-20 06:22:15.553968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.368 [2024-11-20 06:22:15.553972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.368 [2024-11-20 06:22:15.558221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.368 [2024-11-20 06:22:15.558249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.368 [2024-11-20 06:22:15.558249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:44.368 [2024-11-20 06:22:15.866903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.368 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:44.368 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:44.368 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:44.627 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:44.627 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:44.886 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:45.144 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=69fac956-efba-45fa-b54e-104cf5fd0745 00:11:45.144 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69fac956-efba-45fa-b54e-104cf5fd0745 lvol 20 00:11:45.144 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=884d2d63-f13e-41cb-bcea-b6826afe0e5c 00:11:45.144 06:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:45.402 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 884d2d63-f13e-41cb-bcea-b6826afe0e5c 00:11:45.661 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:45.661 [2024-11-20 06:22:17.472226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.919 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:45.919 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=396245 00:11:45.919 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:45.919 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:47.294 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 884d2d63-f13e-41cb-bcea-b6826afe0e5c MY_SNAPSHOT 00:11:47.294 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4d2c9d65-af97-4cca-910c-b5578a034293 00:11:47.295 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 884d2d63-f13e-41cb-bcea-b6826afe0e5c 30 00:11:47.552 06:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4d2c9d65-af97-4cca-910c-b5578a034293 MY_CLONE 00:11:47.810 06:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bfaf0658-bb77-466e-9553-6a3ac8a5c7fc 00:11:47.810 06:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bfaf0658-bb77-466e-9553-6a3ac8a5c7fc 00:11:48.377 06:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 396245 00:11:56.491 Initializing NVMe Controllers 00:11:56.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:56.491 Controller IO queue size 128, less than required. 00:11:56.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:11:56.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:56.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:56.491 Initialization complete. Launching workers. 00:11:56.491 ======================================================== 00:11:56.491 Latency(us) 00:11:56.492 Device Information : IOPS MiB/s Average min max 00:11:56.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12205.90 47.68 10491.82 1263.15 64637.47 00:11:56.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12046.00 47.05 10630.70 3549.40 62651.42 00:11:56.492 ======================================================== 00:11:56.492 Total : 24251.90 94.73 10560.80 1263.15 64637.47 00:11:56.492 00:11:56.492 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:56.492 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 884d2d63-f13e-41cb-bcea-b6826afe0e5c 00:11:56.749 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69fac956-efba-45fa-b54e-104cf5fd0745 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.007 rmmod nvme_tcp 00:11:57.007 rmmod nvme_fabrics 00:11:57.007 rmmod nvme_keyring 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 395870 ']' 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 395870 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 395870 ']' 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 395870 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 395870 00:11:57.007 06:22:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 395870' 00:11:57.007 killing process with pid 395870 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 395870 00:11:57.007 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 395870 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.266 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.803 00:11:59.803 real 0m21.967s 00:11:59.803 user 1m3.015s 00:11:59.803 sys 0m7.719s 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:59.803 ************************************ 00:11:59.803 END TEST nvmf_lvol 00:11:59.803 ************************************ 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:59.803 ************************************ 00:11:59.803 START TEST nvmf_lvs_grow 00:11:59.803 ************************************ 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:59.803 * Looking for test storage... 
00:11:59.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.803 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.803 --rc genhtml_branch_coverage=1 00:11:59.803 --rc genhtml_function_coverage=1 00:11:59.803 --rc genhtml_legend=1 00:11:59.803 --rc geninfo_all_blocks=1 00:11:59.803 --rc geninfo_unexecuted_blocks=1 00:11:59.803 00:11:59.803 ' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:59.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.804 --rc genhtml_branch_coverage=1 00:11:59.804 --rc genhtml_function_coverage=1 00:11:59.804 --rc genhtml_legend=1 00:11:59.804 --rc geninfo_all_blocks=1 00:11:59.804 --rc geninfo_unexecuted_blocks=1 00:11:59.804 00:11:59.804 ' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:59.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.804 --rc genhtml_branch_coverage=1 00:11:59.804 --rc genhtml_function_coverage=1 00:11:59.804 --rc genhtml_legend=1 00:11:59.804 --rc geninfo_all_blocks=1 00:11:59.804 --rc geninfo_unexecuted_blocks=1 00:11:59.804 00:11:59.804 ' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:59.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.804 --rc genhtml_branch_coverage=1 00:11:59.804 --rc genhtml_function_coverage=1 00:11:59.804 --rc genhtml_legend=1 00:11:59.804 --rc geninfo_all_blocks=1 00:11:59.804 --rc geninfo_unexecuted_blocks=1 00:11:59.804 00:11:59.804 ' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:59.804 06:22:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.804 06:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:06.376 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:06.376 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.376 06:22:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:06.376 Found net devices under 0000:86:00.0: cvl_0_0 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:06.376 Found net devices under 0000:86:00.1: cvl_0_1 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.376 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:06.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:12:06.377 00:12:06.377 --- 10.0.0.2 ping statistics --- 00:12:06.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.377 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:12:06.377 00:12:06.377 --- 10.0.0.1 ping statistics --- 00:12:06.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.377 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=401647 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 401647 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 401647 ']' 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.377 [2024-11-20 06:22:37.445478] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:12:06.377 [2024-11-20 06:22:37.445523] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.377 [2024-11-20 06:22:37.523759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.377 [2024-11-20 06:22:37.562571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.377 [2024-11-20 06:22:37.562606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.377 [2024-11-20 06:22:37.562612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.377 [2024-11-20 06:22:37.562618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.377 [2024-11-20 06:22:37.562625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.377 [2024-11-20 06:22:37.563191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:06.377 [2024-11-20 06:22:37.866655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.377 ************************************ 00:12:06.377 START TEST lvs_grow_clean 00:12:06.377 ************************************ 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:06.377 06:22:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.377 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:06.377 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:06.377 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:06.636 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:06.636 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:06.636 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:06.894 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:06.894 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:06.894 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d lvol 150 00:12:06.894 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6796f611-03d4-4c09-bc1b-02e25b7e4863 00:12:06.894 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.894 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:07.153 [2024-11-20 06:22:38.898127] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:07.153 [2024-11-20 06:22:38.898182] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:07.153 true 00:12:07.153 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:07.153 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:07.412 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:07.412 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:07.670 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6796f611-03d4-4c09-bc1b-02e25b7e4863 00:12:07.670 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:07.929 [2024-11-20 06:22:39.616299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.929 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=402141 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 402141 /var/tmp/bdevperf.sock 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 402141 ']' 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:08.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:08.187 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:08.188 [2024-11-20 06:22:39.826514] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:12:08.188 [2024-11-20 06:22:39.826555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402141 ] 00:12:08.188 [2024-11-20 06:22:39.899408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.188 [2024-11-20 06:22:39.939523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.446 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:08.446 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:12:08.446 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:08.705 Nvme0n1 00:12:08.705 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:08.705 [ 00:12:08.705 { 00:12:08.705 "name": "Nvme0n1", 00:12:08.705 "aliases": [ 00:12:08.705 "6796f611-03d4-4c09-bc1b-02e25b7e4863" 00:12:08.705 ], 00:12:08.705 "product_name": "NVMe disk", 00:12:08.705 "block_size": 4096, 00:12:08.705 "num_blocks": 38912, 00:12:08.705 "uuid": "6796f611-03d4-4c09-bc1b-02e25b7e4863", 00:12:08.705 "numa_id": 1, 00:12:08.705 "assigned_rate_limits": { 00:12:08.705 "rw_ios_per_sec": 0, 00:12:08.705 "rw_mbytes_per_sec": 0, 00:12:08.705 "r_mbytes_per_sec": 0, 00:12:08.705 "w_mbytes_per_sec": 0 00:12:08.705 }, 00:12:08.705 "claimed": false, 00:12:08.705 "zoned": false, 00:12:08.705 "supported_io_types": { 00:12:08.705 "read": true, 00:12:08.705 "write": true, 00:12:08.705 "unmap": true, 00:12:08.705 "flush": true, 00:12:08.705 "reset": true, 00:12:08.705 "nvme_admin": true, 00:12:08.705 "nvme_io": true, 00:12:08.705 "nvme_io_md": false, 00:12:08.705 "write_zeroes": true, 00:12:08.705 "zcopy": false, 00:12:08.705 "get_zone_info": false, 00:12:08.705 "zone_management": false, 00:12:08.705 "zone_append": false, 00:12:08.705 "compare": true, 00:12:08.705 "compare_and_write": true, 00:12:08.705 "abort": true, 00:12:08.705 "seek_hole": false, 00:12:08.705 "seek_data": false, 00:12:08.705 "copy": true, 00:12:08.705 "nvme_iov_md": false 00:12:08.705 }, 00:12:08.705 "memory_domains": [ 00:12:08.705 { 00:12:08.705 "dma_device_id": "system", 00:12:08.705 "dma_device_type": 1 00:12:08.705 } 00:12:08.705 ], 00:12:08.705 "driver_specific": { 00:12:08.705 "nvme": [ 00:12:08.705 { 00:12:08.705 "trid": { 00:12:08.705 "trtype": "TCP", 00:12:08.705 "adrfam": "IPv4", 00:12:08.705 "traddr": "10.0.0.2", 00:12:08.705 "trsvcid": "4420", 00:12:08.705 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:08.705 }, 00:12:08.705 "ctrlr_data": { 00:12:08.705 "cntlid": 1, 00:12:08.705 "vendor_id": "0x8086", 00:12:08.705 "model_number": "SPDK bdev Controller", 00:12:08.705 "serial_number": "SPDK0", 00:12:08.705 "firmware_revision": "25.01", 00:12:08.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:08.705 "oacs": { 00:12:08.705 "security": 0, 00:12:08.705 "format": 0, 00:12:08.705 "firmware": 0, 00:12:08.705 "ns_manage": 0 00:12:08.705 }, 00:12:08.705 "multi_ctrlr": true, 00:12:08.705 
"ana_reporting": false 00:12:08.705 }, 00:12:08.705 "vs": { 00:12:08.705 "nvme_version": "1.3" 00:12:08.705 }, 00:12:08.705 "ns_data": { 00:12:08.705 "id": 1, 00:12:08.705 "can_share": true 00:12:08.705 } 00:12:08.705 } 00:12:08.705 ], 00:12:08.705 "mp_policy": "active_passive" 00:12:08.705 } 00:12:08.705 } 00:12:08.705 ] 00:12:08.705 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=402335 00:12:08.705 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:08.705 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:08.964 Running I/O for 10 seconds... 00:12:09.900 Latency(us) 00:12:09.900 [2024-11-20T05:22:41.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.900 Nvme0n1 : 1.00 23099.00 90.23 0.00 0.00 0.00 0.00 0.00 00:12:09.900 [2024-11-20T05:22:41.736Z] =================================================================================================================== 00:12:09.900 [2024-11-20T05:22:41.736Z] Total : 23099.00 90.23 0.00 0.00 0.00 0.00 0.00 00:12:09.900 00:12:10.836 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:10.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.836 Nvme0n1 : 2.00 23434.00 91.54 0.00 0.00 0.00 0.00 0.00 00:12:10.836 [2024-11-20T05:22:42.672Z] =================================================================================================================== 00:12:10.836 [2024-11-20T05:22:42.672Z] Total : 23434.00 91.54 0.00 0.00 0.00 0.00 0.00 00:12:10.836 00:12:11.095 true 00:12:11.095 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:11.095 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:11.095 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:11.095 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:11.095 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 402335 00:12:12.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.032 Nvme0n1 : 3.00 23544.00 91.97 0.00 0.00 0.00 0.00 0.00 00:12:12.032 [2024-11-20T05:22:43.868Z] =================================================================================================================== 00:12:12.032 [2024-11-20T05:22:43.869Z] Total : 23544.00 91.97 0.00 0.00 0.00 0.00 0.00 00:12:12.033 00:12:13.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.057 Nvme0n1 : 4.00 23628.50 92.30 0.00 0.00 0.00 0.00 0.00 00:12:13.057 [2024-11-20T05:22:44.893Z] 
=================================================================================================================== 00:12:13.057 [2024-11-20T05:22:44.893Z] Total : 23628.50 92.30 0.00 0.00 0.00 0.00 0.00 00:12:13.057 00:12:14.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.000 Nvme0n1 : 5.00 23709.20 92.61 0.00 0.00 0.00 0.00 0.00 00:12:14.000 [2024-11-20T05:22:45.836Z] =================================================================================================================== 00:12:14.000 [2024-11-20T05:22:45.836Z] Total : 23709.20 92.61 0.00 0.00 0.00 0.00 0.00 00:12:14.000 00:12:14.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.936 Nvme0n1 : 6.00 23761.00 92.82 0.00 0.00 0.00 0.00 0.00 00:12:14.936 [2024-11-20T05:22:46.772Z] =================================================================================================================== 00:12:14.936 [2024-11-20T05:22:46.772Z] Total : 23761.00 92.82 0.00 0.00 0.00 0.00 0.00 00:12:14.936 00:12:15.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.873 Nvme0n1 : 7.00 23797.86 92.96 0.00 0.00 0.00 0.00 0.00 00:12:15.873 [2024-11-20T05:22:47.709Z] =================================================================================================================== 00:12:15.873 [2024-11-20T05:22:47.709Z] Total : 23797.86 92.96 0.00 0.00 0.00 0.00 0.00 00:12:15.873 00:12:16.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.808 Nvme0n1 : 8.00 23835.12 93.11 0.00 0.00 0.00 0.00 0.00 00:12:16.808 [2024-11-20T05:22:48.644Z] =================================================================================================================== 00:12:16.808 [2024-11-20T05:22:48.644Z] Total : 23835.12 93.11 0.00 0.00 0.00 0.00 0.00 00:12:16.808 00:12:18.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.184 Nvme0n1 : 9.00 23864.78 93.22 0.00 0.00 0.00 0.00 0.00 00:12:18.184 [2024-11-20T05:22:50.020Z] =================================================================================================================== 00:12:18.184 [2024-11-20T05:22:50.020Z] Total : 23864.78 93.22 0.00 0.00 0.00 0.00 0.00 00:12:18.184 00:12:19.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.119 Nvme0n1 : 10.00 23849.40 93.16 0.00 0.00 0.00 0.00 0.00 00:12:19.119 [2024-11-20T05:22:50.955Z] =================================================================================================================== 00:12:19.119 [2024-11-20T05:22:50.955Z] Total : 23849.40 93.16 0.00 0.00 0.00 0.00 0.00 00:12:19.119 00:12:19.119 00:12:19.119 Latency(us) 00:12:19.119 [2024-11-20T05:22:50.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.119 Nvme0n1 : 10.00 23847.23 93.15 0.00 0.00 5364.13 3167.57 15978.30 00:12:19.119 [2024-11-20T05:22:50.955Z] =================================================================================================================== 00:12:19.119 [2024-11-20T05:22:50.955Z] Total : 23847.23 93.15 0.00 0.00 5364.13 3167.57 15978.30 00:12:19.119 { 00:12:19.119 "results": [ 00:12:19.119 { 00:12:19.119 "job": "Nvme0n1", 00:12:19.119 "core_mask": "0x2", 00:12:19.119 "workload": "randwrite", 00:12:19.119 "status": "finished", 00:12:19.119 "queue_depth": 128, 00:12:19.119 "io_size": 4096, 00:12:19.119 
"runtime": 10.003637, 00:12:19.119 "iops": 23847.22676362607, 00:12:19.119 "mibps": 93.15322954541433, 00:12:19.119 "io_failed": 0, 00:12:19.119 "io_timeout": 0, 00:12:19.119 "avg_latency_us": 5364.132625927219, 00:12:19.119 "min_latency_us": 3167.5733333333333, 00:12:19.119 "max_latency_us": 15978.300952380952 00:12:19.119 } 00:12:19.119 ], 00:12:19.119 "core_count": 1 00:12:19.119 } 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 402141 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 402141 ']' 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 402141 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 402141 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 402141' 00:12:19.119 killing process with pid 402141 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 402141 00:12:19.119 Received shutdown signal, test time was about 10.000000 seconds 00:12:19.119 00:12:19.119 Latency(us) 00:12:19.119 [2024-11-20T05:22:50.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.119 [2024-11-20T05:22:50.955Z] =================================================================================================================== 00:12:19.119 [2024-11-20T05:22:50.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 402141 00:12:19.119 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:19.377 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:19.635 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:19.635 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:19.635 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:19.635 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:19.635 06:22:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:19.894 [2024-11-20 06:22:51.633559] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.894 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.895 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:19.895 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:20.153 request: 00:12:20.153 { 00:12:20.153 "uuid": "1216fb98-36dc-4c96-bd13-8c6a08854e1d", 00:12:20.153 "method": "bdev_lvol_get_lvstores", 00:12:20.153 "req_id": 1 00:12:20.153 } 00:12:20.153 Got JSON-RPC error response 00:12:20.153 response: 00:12:20.153 { 00:12:20.153 "code": -19, 00:12:20.153 "message": "No such device" 00:12:20.153 } 00:12:20.153 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:12:20.153 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:20.153 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:20.153 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:20.153 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:20.412 aio_bdev 00:12:20.412 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6796f611-03d4-4c09-bc1b-02e25b7e4863 00:12:20.412 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=6796f611-03d4-4c09-bc1b-02e25b7e4863 00:12:20.412 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:20.412 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:12:20.412 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:20.413 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:20.413 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:20.413 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6796f611-03d4-4c09-bc1b-02e25b7e4863 -t 2000 00:12:20.671 [ 00:12:20.671 { 00:12:20.671 "name": "6796f611-03d4-4c09-bc1b-02e25b7e4863", 00:12:20.671 "aliases": [ 00:12:20.671 "lvs/lvol" 00:12:20.671 ], 00:12:20.671 "product_name": "Logical Volume", 00:12:20.671 "block_size": 4096, 00:12:20.671 "num_blocks": 38912, 00:12:20.671 "uuid": "6796f611-03d4-4c09-bc1b-02e25b7e4863", 00:12:20.672 "assigned_rate_limits": { 00:12:20.672 "rw_ios_per_sec": 0, 00:12:20.672 "rw_mbytes_per_sec": 0, 00:12:20.672 "r_mbytes_per_sec": 0, 00:12:20.672 "w_mbytes_per_sec": 0 00:12:20.672 }, 00:12:20.672 "claimed": false, 00:12:20.672 "zoned": false, 00:12:20.672 "supported_io_types": { 00:12:20.672 "read": true, 00:12:20.672 "write": true, 00:12:20.672 "unmap": true, 00:12:20.672 "flush": false, 00:12:20.672 "reset": true, 00:12:20.672 "nvme_admin": false, 00:12:20.672 "nvme_io": false, 00:12:20.672 "nvme_io_md": false, 00:12:20.672 "write_zeroes": true, 00:12:20.672 "zcopy": false, 00:12:20.672 "get_zone_info": false, 00:12:20.672 "zone_management": false, 00:12:20.672 "zone_append": false, 00:12:20.672 "compare": false, 00:12:20.672 "compare_and_write": false, 00:12:20.672 "abort": false, 00:12:20.672 "seek_hole": true, 00:12:20.672 "seek_data": true, 00:12:20.672 "copy": false, 00:12:20.672 "nvme_iov_md": false 00:12:20.672 }, 00:12:20.672 "driver_specific": { 00:12:20.672 "lvol": { 00:12:20.672 "lvol_store_uuid": "1216fb98-36dc-4c96-bd13-8c6a08854e1d", 00:12:20.672 "base_bdev": "aio_bdev", 00:12:20.672 "thin_provision": false, 00:12:20.672 "num_allocated_clusters": 38, 00:12:20.672 "snapshot": false, 00:12:20.672 "clone": false, 00:12:20.672 "esnap_clone": false 00:12:20.672 } 00:12:20.672 } 00:12:20.672 } 00:12:20.672 ] 00:12:20.672 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:12:20.672 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:20.672 
06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:20.931 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:20.931 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:20.931 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:21.190 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:21.190 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6796f611-03d4-4c09-bc1b-02e25b7e4863 00:12:21.190 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1216fb98-36dc-4c96-bd13-8c6a08854e1d 00:12:21.449 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:21.709 00:12:21.709 real 0m15.481s 00:12:21.709 user 0m15.006s 00:12:21.709 sys 0m1.527s 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:21.709 ************************************ 00:12:21.709 END TEST lvs_grow_clean 00:12:21.709 ************************************ 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:21.709 ************************************ 00:12:21.709 START TEST lvs_grow_dirty 00:12:21.709 ************************************ 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:21.709 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:21.968 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:21.968 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:22.227 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:22.227 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:22.227 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:22.486 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:22.486 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:22.486 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 26d692ef-e02b-419a-b9a5-0891b62ec20d lvol 150 00:12:22.486 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3d1f1a8e-4565-40a3-8094-f7e95037b5e7 00:12:22.486 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:22.486 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:22.745 [2024-11-20 06:22:54.471165] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:22.745 [2024-11-20 06:22:54.471222] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:22.745 true 00:12:22.745 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:22.745 06:22:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:23.005 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:23.005 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:23.264 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3d1f1a8e-4565-40a3-8094-f7e95037b5e7 00:12:23.264 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:23.523 [2024-11-20 06:22:55.173261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.523 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=404747 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 404747 /var/tmp/bdevperf.sock 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 404747 ']' 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:23.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:23.782 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:23.782 [2024-11-20 06:22:55.413860] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:12:23.782 [2024-11-20 06:22:55.413913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404747 ] 00:12:23.782 [2024-11-20 06:22:55.489552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.782 [2024-11-20 06:22:55.531578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.041 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:24.041 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:12:24.041 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:24.299 Nvme0n1 00:12:24.300 06:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:24.558 [ 00:12:24.558 { 00:12:24.558 "name": "Nvme0n1", 00:12:24.558 "aliases": [ 00:12:24.558 "3d1f1a8e-4565-40a3-8094-f7e95037b5e7" 00:12:24.558 ], 00:12:24.558 "product_name": "NVMe disk", 00:12:24.558 "block_size": 4096, 00:12:24.558 "num_blocks": 38912, 00:12:24.558 "uuid": "3d1f1a8e-4565-40a3-8094-f7e95037b5e7", 00:12:24.558 "numa_id": 1, 00:12:24.558 "assigned_rate_limits": { 00:12:24.558 "rw_ios_per_sec": 0, 00:12:24.558 "rw_mbytes_per_sec": 0, 00:12:24.558 "r_mbytes_per_sec": 0, 00:12:24.558 "w_mbytes_per_sec": 0 00:12:24.558 }, 00:12:24.558 "claimed": false, 00:12:24.558 "zoned": false, 00:12:24.558 "supported_io_types": { 00:12:24.558 "read": true, 00:12:24.558 "write": true, 00:12:24.558 "unmap": true, 00:12:24.558 "flush": true, 00:12:24.558 "reset": true, 00:12:24.558 "nvme_admin": true, 00:12:24.558 "nvme_io": true, 00:12:24.558 "nvme_io_md": false, 00:12:24.558 "write_zeroes": true, 00:12:24.558 "zcopy": false, 00:12:24.558 "get_zone_info": false, 00:12:24.558 "zone_management": false, 00:12:24.558 "zone_append": false, 00:12:24.558 "compare": true, 00:12:24.558 "compare_and_write": true, 00:12:24.558 "abort": true, 00:12:24.558 "seek_hole": false, 00:12:24.558 "seek_data": false, 00:12:24.558 "copy": true, 00:12:24.558 "nvme_iov_md": false 00:12:24.558 }, 00:12:24.558 "memory_domains": [ 00:12:24.558 { 00:12:24.558 "dma_device_id": "system", 00:12:24.558 "dma_device_type": 1 00:12:24.558 } 00:12:24.558 ], 00:12:24.558 "driver_specific": { 00:12:24.558 "nvme": [ 00:12:24.558 { 00:12:24.558 "trid": { 00:12:24.558 "trtype": "TCP", 00:12:24.558 "adrfam": "IPv4", 00:12:24.558 "traddr": "10.0.0.2", 00:12:24.558 "trsvcid": "4420", 00:12:24.558 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:24.558 }, 00:12:24.558 "ctrlr_data": { 00:12:24.558 "cntlid": 1, 00:12:24.558 "vendor_id": "0x8086", 00:12:24.558 "model_number": "SPDK bdev Controller", 00:12:24.558 "serial_number": "SPDK0", 00:12:24.558 "firmware_revision": "25.01", 00:12:24.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:24.558 "oacs": { 00:12:24.558 "security": 0, 00:12:24.558 "format": 0, 00:12:24.558 "firmware": 0, 00:12:24.558 "ns_manage": 0 00:12:24.558 }, 00:12:24.558 "multi_ctrlr": true, 00:12:24.558 
"ana_reporting": false 00:12:24.558 }, 00:12:24.558 "vs": { 00:12:24.558 "nvme_version": "1.3" 00:12:24.558 }, 00:12:24.558 "ns_data": { 00:12:24.558 "id": 1, 00:12:24.558 "can_share": true 00:12:24.558 } 00:12:24.558 } 00:12:24.558 ], 00:12:24.558 "mp_policy": "active_passive" 00:12:24.558 } 00:12:24.558 } 00:12:24.558 ] 00:12:24.558 06:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=404974 00:12:24.558 06:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:24.558 06:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:24.558 Running I/O for 10 seconds... 00:12:25.494 Latency(us) 00:12:25.494 [2024-11-20T05:22:57.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.494 Nvme0n1 : 1.00 23455.00 91.62 0.00 0.00 0.00 0.00 0.00 00:12:25.494 [2024-11-20T05:22:57.330Z] =================================================================================================================== 00:12:25.494 [2024-11-20T05:22:57.330Z] Total : 23455.00 91.62 0.00 0.00 0.00 0.00 0.00 00:12:25.494 00:12:26.431 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:26.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.690 Nvme0n1 : 2.00 23636.00 92.33 0.00 0.00 0.00 0.00 0.00 00:12:26.690 [2024-11-20T05:22:58.526Z] =================================================================================================================== 00:12:26.690 [2024-11-20T05:22:58.526Z] Total : 23636.00 92.33 0.00 0.00 0.00 0.00 0.00 00:12:26.690 00:12:26.690 true 00:12:26.690 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:26.690 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:26.949 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:26.949 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:26.949 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 404974 00:12:27.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:27.521 Nvme0n1 : 3.00 23665.00 92.44 0.00 0.00 0.00 0.00 0.00 00:12:27.521 [2024-11-20T05:22:59.357Z] =================================================================================================================== 00:12:27.521 [2024-11-20T05:22:59.357Z] Total : 23665.00 92.44 0.00 0.00 0.00 0.00 0.00 00:12:27.521 00:12:28.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.898 Nvme0n1 : 4.00 23665.00 92.44 0.00 0.00 0.00 0.00 0.00 00:12:28.898 [2024-11-20T05:23:00.734Z] 
=================================================================================================================== 00:12:28.898 [2024-11-20T05:23:00.734Z] Total : 23665.00 92.44 0.00 0.00 0.00 0.00 0.00 00:12:28.898 00:12:29.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.835 Nvme0n1 : 5.00 23719.80 92.66 0.00 0.00 0.00 0.00 0.00 00:12:29.835 [2024-11-20T05:23:01.671Z] =================================================================================================================== 00:12:29.835 [2024-11-20T05:23:01.671Z] Total : 23719.80 92.66 0.00 0.00 0.00 0.00 0.00 00:12:29.835 00:12:30.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.771 Nvme0n1 : 6.00 23768.00 92.84 0.00 0.00 0.00 0.00 0.00 00:12:30.771 [2024-11-20T05:23:02.608Z] =================================================================================================================== 00:12:30.772 [2024-11-20T05:23:02.608Z] Total : 23768.00 92.84 0.00 0.00 0.00 0.00 0.00 00:12:30.772 00:12:31.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.708 Nvme0n1 : 7.00 23808.14 93.00 0.00 0.00 0.00 0.00 0.00 00:12:31.708 [2024-11-20T05:23:03.544Z] =================================================================================================================== 00:12:31.708 [2024-11-20T05:23:03.544Z] Total : 23808.14 93.00 0.00 0.00 0.00 0.00 0.00 00:12:31.708 00:12:32.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.645 Nvme0n1 : 8.00 23828.38 93.08 0.00 0.00 0.00 0.00 0.00 00:12:32.645 [2024-11-20T05:23:04.481Z] =================================================================================================================== 00:12:32.645 [2024-11-20T05:23:04.481Z] Total : 23828.38 93.08 0.00 0.00 0.00 0.00 0.00 00:12:32.645 00:12:33.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.582 Nvme0n1 : 9.00 23848.67 93.16 0.00 0.00 0.00 0.00 0.00 00:12:33.582 [2024-11-20T05:23:05.418Z] =================================================================================================================== 00:12:33.582 [2024-11-20T05:23:05.418Z] Total : 23848.67 93.16 0.00 0.00 0.00 0.00 0.00 00:12:33.582 00:12:34.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.518 Nvme0n1 : 10.00 23857.90 93.19 0.00 0.00 0.00 0.00 0.00 00:12:34.518 [2024-11-20T05:23:06.354Z] =================================================================================================================== 00:12:34.518 [2024-11-20T05:23:06.354Z] Total : 23857.90 93.19 0.00 0.00 0.00 0.00 0.00 00:12:34.518 00:12:34.518 00:12:34.518 Latency(us) 00:12:34.518 [2024-11-20T05:23:06.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.518 Nvme0n1 : 10.00 23860.70 93.21 0.00 0.00 5361.51 3167.57 11796.48 00:12:34.518 [2024-11-20T05:23:06.354Z] =================================================================================================================== 00:12:34.518 [2024-11-20T05:23:06.354Z] Total : 23860.70 93.21 0.00 0.00 5361.51 3167.57 11796.48 00:12:34.518 { 00:12:34.518 "results": [ 00:12:34.518 { 00:12:34.518 "job": "Nvme0n1", 00:12:34.518 "core_mask": "0x2", 00:12:34.518 "workload": "randwrite", 00:12:34.518 "status": "finished", 00:12:34.518 "queue_depth": 128, 00:12:34.518 "io_size": 4096, 00:12:34.518 
"runtime": 10.004191, 00:12:34.518 "iops": 23860.699980638114, 00:12:34.518 "mibps": 93.20585929936763, 00:12:34.518 "io_failed": 0, 00:12:34.518 "io_timeout": 0, 00:12:34.518 "avg_latency_us": 5361.511134379326, 00:12:34.518 "min_latency_us": 3167.5733333333333, 00:12:34.518 "max_latency_us": 11796.48 00:12:34.518 } 00:12:34.518 ], 00:12:34.518 "core_count": 1 00:12:34.518 } 00:12:34.777 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 404747 00:12:34.777 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 404747 ']' 00:12:34.777 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 404747 00:12:34.777 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:12:34.777 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:34.777 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 404747 00:12:34.777 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:34.777 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:34.778 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 404747' 00:12:34.778 killing process with pid 404747 00:12:34.778 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 404747 00:12:34.778 Received shutdown signal, test time was about 10.000000 seconds 00:12:34.778 00:12:34.778 Latency(us) 00:12:34.778 [2024-11-20T05:23:06.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.778 [2024-11-20T05:23:06.614Z] =================================================================================================================== 00:12:34.778 [2024-11-20T05:23:06.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:34.778 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 404747 00:12:34.778 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:35.037 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:35.296 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:35.296 06:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:35.555 06:23:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 401647 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 401647 00:12:35.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 401647 Killed "${NVMF_APP[@]}" "$@" 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=406829 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 406829 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 406829 ']' 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:35.555 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:35.555 [2024-11-20 06:23:07.262815] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:12:35.555 [2024-11-20 06:23:07.262861] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.555 [2024-11-20 06:23:07.342144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.555 [2024-11-20 06:23:07.382257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.555 [2024-11-20 06:23:07.382292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.555 [2024-11-20 06:23:07.382299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.555 [2024-11-20 06:23:07.382305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:35.555 [2024-11-20 06:23:07.382309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.555 [2024-11-20 06:23:07.382885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.821 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:35.821 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:12:35.821 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.821 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.821 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:35.821 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.821 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:36.081 [2024-11-20 06:23:07.679998] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:36.081 [2024-11-20 06:23:07.680086] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:36.081 [2024-11-20 06:23:07.680111] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3d1f1a8e-4565-40a3-8094-f7e95037b5e7 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=3d1f1a8e-4565-40a3-8094-f7e95037b5e7 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:36.081 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3d1f1a8e-4565-40a3-8094-f7e95037b5e7 -t 2000 00:12:36.340 [ 00:12:36.340 { 00:12:36.340 "name": "3d1f1a8e-4565-40a3-8094-f7e95037b5e7", 00:12:36.340 "aliases": [ 00:12:36.340 "lvs/lvol" 00:12:36.340 ], 00:12:36.340 "product_name": "Logical Volume", 00:12:36.340 "block_size": 4096, 00:12:36.340 "num_blocks": 38912, 00:12:36.340 "uuid": "3d1f1a8e-4565-40a3-8094-f7e95037b5e7", 00:12:36.340 "assigned_rate_limits": { 00:12:36.340 "rw_ios_per_sec": 0, 00:12:36.340 "rw_mbytes_per_sec": 0, 
00:12:36.340 "r_mbytes_per_sec": 0, 00:12:36.340 "w_mbytes_per_sec": 0 00:12:36.340 }, 00:12:36.340 "claimed": false, 00:12:36.340 "zoned": false, 00:12:36.340 "supported_io_types": { 00:12:36.340 "read": true, 00:12:36.340 "write": true, 00:12:36.340 "unmap": true, 00:12:36.340 "flush": false, 00:12:36.340 "reset": true, 00:12:36.340 "nvme_admin": false, 00:12:36.340 "nvme_io": false, 00:12:36.340 "nvme_io_md": false, 00:12:36.340 "write_zeroes": true, 00:12:36.340 "zcopy": false, 00:12:36.340 "get_zone_info": false, 00:12:36.340 "zone_management": false, 00:12:36.340 "zone_append": false, 00:12:36.340 "compare": false, 00:12:36.340 "compare_and_write": false, 00:12:36.340 "abort": false, 00:12:36.340 "seek_hole": true, 00:12:36.340 "seek_data": true, 00:12:36.340 "copy": false, 00:12:36.340 "nvme_iov_md": false 00:12:36.340 }, 00:12:36.340 "driver_specific": { 00:12:36.340 "lvol": { 00:12:36.340 "lvol_store_uuid": "26d692ef-e02b-419a-b9a5-0891b62ec20d", 00:12:36.340 "base_bdev": "aio_bdev", 00:12:36.340 "thin_provision": false, 00:12:36.340 "num_allocated_clusters": 38, 00:12:36.340 "snapshot": false, 00:12:36.340 "clone": false, 00:12:36.340 "esnap_clone": false 00:12:36.340 } 00:12:36.340 } 00:12:36.340 } 00:12:36.340 ] 00:12:36.340 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:12:36.340 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:36.340 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:36.599 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:36.599 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:36.599 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:36.859 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:36.859 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:36.859 [2024-11-20 06:23:08.657054] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:37.118 request: 00:12:37.118 { 00:12:37.118 "uuid": "26d692ef-e02b-419a-b9a5-0891b62ec20d", 00:12:37.118 "method": "bdev_lvol_get_lvstores", 00:12:37.118 "req_id": 1 00:12:37.118 } 00:12:37.118 Got JSON-RPC error response 00:12:37.118 response: 00:12:37.118 { 00:12:37.118 "code": -19, 00:12:37.118 "message": "No such device" 00:12:37.118 } 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.118 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:37.377 aio_bdev 00:12:37.377 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3d1f1a8e-4565-40a3-8094-f7e95037b5e7 00:12:37.377 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=3d1f1a8e-4565-40a3-8094-f7e95037b5e7 00:12:37.377 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:37.377 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:12:37.377 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:37.377 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:37.377 06:23:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:37.638 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3d1f1a8e-4565-40a3-8094-f7e95037b5e7 -t 2000 00:12:37.638 [ 00:12:37.638 { 00:12:37.638 "name": "3d1f1a8e-4565-40a3-8094-f7e95037b5e7", 00:12:37.638 "aliases": [ 00:12:37.638 "lvs/lvol" 00:12:37.638 ], 00:12:37.638 "product_name": "Logical Volume", 00:12:37.638 "block_size": 4096, 00:12:37.638 "num_blocks": 38912, 00:12:37.638 "uuid": "3d1f1a8e-4565-40a3-8094-f7e95037b5e7", 00:12:37.638 "assigned_rate_limits": { 00:12:37.638 "rw_ios_per_sec": 0, 00:12:37.638 "rw_mbytes_per_sec": 0, 00:12:37.638 "r_mbytes_per_sec": 0, 00:12:37.638 "w_mbytes_per_sec": 0 00:12:37.638 }, 00:12:37.638 "claimed": false, 00:12:37.638 "zoned": false, 00:12:37.638 "supported_io_types": { 00:12:37.638 "read": true, 00:12:37.638 "write": true, 00:12:37.638 "unmap": true, 00:12:37.638 "flush": false, 00:12:37.638 "reset": true, 00:12:37.638 "nvme_admin": false, 00:12:37.638 "nvme_io": false, 00:12:37.638 "nvme_io_md": false, 00:12:37.638 "write_zeroes": true, 00:12:37.638 "zcopy": false, 00:12:37.638 "get_zone_info": false, 00:12:37.638 "zone_management": false, 00:12:37.638 "zone_append": false, 00:12:37.638 "compare": false, 00:12:37.638 "compare_and_write": false, 00:12:37.638 "abort": false, 00:12:37.638 "seek_hole": true, 00:12:37.638 "seek_data": true, 00:12:37.638 "copy": false, 00:12:37.638 "nvme_iov_md": false 00:12:37.638 }, 00:12:37.638 "driver_specific": { 00:12:37.638 "lvol": { 00:12:37.638 "lvol_store_uuid": "26d692ef-e02b-419a-b9a5-0891b62ec20d", 00:12:37.638 "base_bdev": "aio_bdev", 00:12:37.638 "thin_provision": false, 00:12:37.638 "num_allocated_clusters": 38, 00:12:37.638 "snapshot": false, 00:12:37.638 "clone": false, 00:12:37.638 "esnap_clone": false 00:12:37.638 } 00:12:37.638 } 00:12:37.638 } 00:12:37.638 ] 00:12:37.638 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:12:37.638 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:37.638 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:37.896 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:37.896 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:37.896 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:38.156 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:38.156 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3d1f1a8e-4565-40a3-8094-f7e95037b5e7 00:12:38.415 06:23:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 26d692ef-e02b-419a-b9a5-0891b62ec20d 00:12:38.415 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:38.674 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:38.675 00:12:38.675 real 0m16.924s 00:12:38.675 user 0m43.667s 00:12:38.675 sys 0m3.735s 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:38.675 ************************************ 00:12:38.675 END TEST lvs_grow_dirty 00:12:38.675 ************************************ 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:38.675 nvmf_trace.0 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:38.675 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:38.675 rmmod nvme_tcp 00:12:38.933 rmmod nvme_fabrics 00:12:38.933 rmmod nvme_keyring 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:38.933 
06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 406829 ']' 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 406829 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 406829 ']' 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 406829 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 406829 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 406829' 00:12:38.933 killing process with pid 406829 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 406829 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 406829 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:38.933 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:38.934 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:38.934 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:38.934 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:38.934 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:38.934 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:39.194 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.194 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:39.194 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.194 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.194 06:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.099 06:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:41.099 00:12:41.099 real 0m41.670s 00:12:41.099 user 1m4.294s 00:12:41.099 sys 0m10.203s 00:12:41.099 06:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:41.099 06:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:41.099 ************************************ 00:12:41.099 END TEST nvmf_lvs_grow 00:12:41.099 ************************************ 00:12:41.099 06:23:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:41.099 06:23:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:41.099 06:23:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:41.099 06:23:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:41.099 ************************************ 00:12:41.099 START TEST nvmf_bdev_io_wait 00:12:41.099 ************************************ 00:12:41.099 06:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:41.359 * Looking for test storage... 00:12:41.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.359 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.360 --rc genhtml_branch_coverage=1 00:12:41.360 --rc genhtml_function_coverage=1 00:12:41.360 --rc genhtml_legend=1 00:12:41.360 --rc geninfo_all_blocks=1 00:12:41.360 --rc geninfo_unexecuted_blocks=1 00:12:41.360 00:12:41.360 ' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.360 --rc genhtml_branch_coverage=1 00:12:41.360 --rc genhtml_function_coverage=1 00:12:41.360 --rc genhtml_legend=1 00:12:41.360 --rc geninfo_all_blocks=1 00:12:41.360 --rc geninfo_unexecuted_blocks=1 00:12:41.360 00:12:41.360 ' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.360 --rc genhtml_branch_coverage=1 00:12:41.360 --rc genhtml_function_coverage=1 00:12:41.360 --rc genhtml_legend=1 00:12:41.360 --rc geninfo_all_blocks=1 00:12:41.360 --rc geninfo_unexecuted_blocks=1 00:12:41.360 00:12:41.360 ' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.360 --rc genhtml_branch_coverage=1 00:12:41.360 --rc genhtml_function_coverage=1 00:12:41.360 --rc genhtml_legend=1 00:12:41.360 --rc geninfo_all_blocks=1 00:12:41.360 --rc geninfo_unexecuted_blocks=1 00:12:41.360 00:12:41.360 ' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.360 06:23:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:12:41.360 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:47.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:47.955 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.955 06:23:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:47.955 Found net devices under 0000:86:00.0: cvl_0_0 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:47.955 Found net devices under 0000:86:00.1: cvl_0_1 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.955 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.955 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.955 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.955 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.955 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.955 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.955 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.955 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.955 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:12:47.956 00:12:47.956 --- 10.0.0.2 ping statistics --- 00:12:47.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.956 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:12:47.956 00:12:47.956 --- 10.0.0.1 ping statistics --- 00:12:47.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.956 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=410890 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 410890 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 410890 ']' 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 [2024-11-20 06:23:19.230786] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:12:47.956 [2024-11-20 06:23:19.230831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.956 [2024-11-20 06:23:19.309389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.956 [2024-11-20 06:23:19.353164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.956 [2024-11-20 06:23:19.353206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.956 [2024-11-20 06:23:19.353213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.956 [2024-11-20 06:23:19.353219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.956 [2024-11-20 06:23:19.353224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.956 [2024-11-20 06:23:19.354666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.956 [2024-11-20 06:23:19.354775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.956 [2024-11-20 06:23:19.354804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.956 [2024-11-20 06:23:19.354804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:12:47.956 [2024-11-20 06:23:19.491354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 Malloc0 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 [2024-11-20 06:23:19.538821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=411092 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=411095 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:47.956 { 00:12:47.956 "params": { 
00:12:47.956 "name": "Nvme$subsystem", 00:12:47.956 "trtype": "$TEST_TRANSPORT", 00:12:47.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:47.956 "adrfam": "ipv4", 00:12:47.956 "trsvcid": "$NVMF_PORT", 00:12:47.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:47.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:47.956 "hdgst": ${hdgst:-false}, 00:12:47.956 "ddgst": ${ddgst:-false} 00:12:47.956 }, 00:12:47.956 "method": "bdev_nvme_attach_controller" 00:12:47.956 } 00:12:47.956 EOF 00:12:47.956 )") 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=411098 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:47.956 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=411102 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:47.957 { 00:12:47.957 "params": { 00:12:47.957 "name": "Nvme$subsystem", 00:12:47.957 "trtype": "$TEST_TRANSPORT", 00:12:47.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:47.957 "adrfam": "ipv4", 00:12:47.957 "trsvcid": "$NVMF_PORT", 00:12:47.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:47.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:47.957 "hdgst": ${hdgst:-false}, 00:12:47.957 "ddgst": ${ddgst:-false} 00:12:47.957 }, 00:12:47.957 "method": "bdev_nvme_attach_controller" 00:12:47.957 } 00:12:47.957 EOF 00:12:47.957 )") 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:47.957 { 00:12:47.957 "params": { 00:12:47.957 "name": "Nvme$subsystem", 00:12:47.957 "trtype": "$TEST_TRANSPORT", 00:12:47.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:47.957 "adrfam": "ipv4", 00:12:47.957 "trsvcid": "$NVMF_PORT", 00:12:47.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:47.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:47.957 "hdgst": ${hdgst:-false}, 
00:12:47.957 "ddgst": ${ddgst:-false} 00:12:47.957 }, 00:12:47.957 "method": "bdev_nvme_attach_controller" 00:12:47.957 } 00:12:47.957 EOF 00:12:47.957 )") 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:47.957 { 00:12:47.957 "params": { 00:12:47.957 "name": "Nvme$subsystem", 00:12:47.957 "trtype": "$TEST_TRANSPORT", 00:12:47.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:47.957 "adrfam": "ipv4", 00:12:47.957 "trsvcid": "$NVMF_PORT", 00:12:47.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:47.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:47.957 "hdgst": ${hdgst:-false}, 00:12:47.957 "ddgst": ${ddgst:-false} 00:12:47.957 }, 00:12:47.957 "method": "bdev_nvme_attach_controller" 00:12:47.957 } 00:12:47.957 EOF 00:12:47.957 )") 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 411092 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:47.957 "params": { 00:12:47.957 "name": "Nvme1", 00:12:47.957 "trtype": "tcp", 00:12:47.957 "traddr": "10.0.0.2", 00:12:47.957 "adrfam": "ipv4", 00:12:47.957 "trsvcid": "4420", 00:12:47.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.957 "hdgst": false, 00:12:47.957 "ddgst": false 00:12:47.957 }, 00:12:47.957 "method": "bdev_nvme_attach_controller" 00:12:47.957 }' 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:47.957 "params": { 00:12:47.957 "name": "Nvme1", 00:12:47.957 "trtype": "tcp", 00:12:47.957 "traddr": "10.0.0.2", 00:12:47.957 "adrfam": "ipv4", 00:12:47.957 "trsvcid": "4420", 00:12:47.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.957 "hdgst": false, 00:12:47.957 "ddgst": false 00:12:47.957 }, 00:12:47.957 "method": "bdev_nvme_attach_controller" 00:12:47.957 }' 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:47.957 "params": { 00:12:47.957 "name": "Nvme1", 00:12:47.957 "trtype": "tcp", 00:12:47.957 "traddr": "10.0.0.2", 00:12:47.957 "adrfam": "ipv4", 00:12:47.957 "trsvcid": "4420", 00:12:47.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.957 "hdgst": false, 00:12:47.957 "ddgst": false 00:12:47.957 }, 00:12:47.957 "method": "bdev_nvme_attach_controller" 00:12:47.957 }' 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:47.957 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:47.957 "params": { 00:12:47.957 "name": "Nvme1", 00:12:47.957 "trtype": "tcp", 00:12:47.957 "traddr": "10.0.0.2", 00:12:47.957 "adrfam": "ipv4", 00:12:47.957 "trsvcid": "4420", 00:12:47.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.957 "hdgst": false, 00:12:47.957 "ddgst": false 00:12:47.957 }, 00:12:47.957 "method": "bdev_nvme_attach_controller" 00:12:47.957 }' 00:12:47.957 [2024-11-20 06:23:19.590398] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:12:47.957 [2024-11-20 06:23:19.590443] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:47.957 [2024-11-20 06:23:19.592328] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:12:47.957 [2024-11-20 06:23:19.592353] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:12:47.957 [2024-11-20 06:23:19.592380] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:47.957 [2024-11-20 06:23:19.592396] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:47.957 [2024-11-20 06:23:19.596633] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:12:47.957 [2024-11-20 06:23:19.596678] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:47.957 [2024-11-20 06:23:19.751539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.253 [2024-11-20 06:23:19.787237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:48.253 [2024-11-20 06:23:19.846198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.253 [2024-11-20 06:23:19.888624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:48.253 [2024-11-20 06:23:19.946150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.253 [2024-11-20 06:23:19.997520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.253 [2024-11-20 06:23:20.000079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:48.253 [2024-11-20 06:23:20.038718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:48.518 Running I/O for 1 seconds... 00:12:48.518 Running I/O for 1 seconds... 00:12:48.518 Running I/O for 1 seconds... 00:12:48.518 Running I/O for 1 seconds... 00:12:49.454 8042.00 IOPS, 31.41 MiB/s 00:12:49.454 Latency(us) 00:12:49.454 [2024-11-20T05:23:21.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.454 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:49.454 Nvme1n1 : 1.02 8056.64 31.47 0.00 0.00 15806.99 6459.98 24217.11 00:12:49.454 [2024-11-20T05:23:21.290Z] =================================================================================================================== 00:12:49.454 [2024-11-20T05:23:21.290Z] Total : 8056.64 31.47 0.00 0.00 15806.99 6459.98 24217.11 00:12:49.454 12477.00 IOPS, 48.74 MiB/s 00:12:49.454 Latency(us) 00:12:49.454 [2024-11-20T05:23:21.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.454 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:49.454 Nvme1n1 : 1.01 12536.19 48.97 0.00 0.00 10178.37 4587.52 20721.86 00:12:49.454 [2024-11-20T05:23:21.290Z] =================================================================================================================== 00:12:49.454 [2024-11-20T05:23:21.290Z] Total : 12536.19 48.97 0.00 0.00 10178.37 4587.52 20721.86 00:12:49.454 7705.00 IOPS, 30.10 MiB/s 00:12:49.454 Latency(us) 00:12:49.454 [2024-11-20T05:23:21.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.454 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:49.454 Nvme1n1 : 1.00 7801.75 30.48 0.00 0.00 16369.87 3386.03 38198.13 00:12:49.454 [2024-11-20T05:23:21.290Z] =================================================================================================================== 00:12:49.454 [2024-11-20T05:23:21.290Z] Total : 7801.75 30.48 0.00 0.00 16369.87 3386.03 38198.13 00:12:49.454 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 411095 00:12:49.454 252144.00 IOPS, 984.94 MiB/s 00:12:49.454 Latency(us) 00:12:49.454 [2024-11-20T05:23:21.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.454 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:49.454 Nvme1n1 : 1.00 251738.14 983.35 0.00 0.00 505.76 238.93 1599.39 00:12:49.454 
[2024-11-20T05:23:21.290Z] =================================================================================================================== 00:12:49.454 [2024-11-20T05:23:21.290Z] Total : 251738.14 983.35 0.00 0.00 505.76 238.93 1599.39 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 411098 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 411102 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.714 rmmod nvme_tcp 00:12:49.714 rmmod nvme_fabrics 00:12:49.714 rmmod nvme_keyring 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 410890 ']' 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 410890 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 410890 ']' 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 410890 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 410890 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 410890' 00:12:49.714 killing process with pid 410890 
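The shutdown that follows reads more clearly with the xtrace prefixes stripped. A condensed sketch of the traced sequence (the PIDs, interface name, and nvmf_tgt pid 410890 are the ones from this particular run; nvmftestfini is the cleanup trap the harness installed earlier):

wait 411095 411098 411102                  # reap the read/flush/unmap bdevperf jobs
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync                                       # nvmfcleanup: settle outstanding I/O first
modprobe -v -r nvme-tcp                    # the rmmod lines in the trace confirm nvme_tcp,
modprobe -v -r nvme-fabrics                # nvme_fabrics and nvme_keyring being unloaded
kill 410890 && wait 410890                 # killprocess: stop the nvmf_tgt reactor
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop the rule ipts added
ip netns delete cvl_0_0_ns_spdk            # what remove_spdk_ns amounts to (assumed spelling)
ip -4 addr flush cvl_0_1                   # return the initiator interface to a clean state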
00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 410890 00:12:49.714 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 410890 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.974 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.513 00:12:52.513 real 0m10.843s 00:12:52.513 user 0m16.294s 00:12:52.513 sys 0m6.136s 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:52.513 ************************************ 00:12:52.513 END TEST nvmf_bdev_io_wait 00:12:52.513 ************************************ 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:52.513 ************************************ 00:12:52.513 START TEST nvmf_queue_depth 00:12:52.513 ************************************ 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:52.513 * Looking for test storage... 
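The scripts/common.sh xtrace just below (the IFS=.-: reads, ver1_l/ver2_l, and the (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) loop) is a field-by-field version comparison, here checking version strings (1.15 against 2) to pick the right lcov coverage flags. A hand-reassembled sketch of those helpers, inferred from the trace rather than copied from the source (the real cmp_versions also validates each field via decimal(), elided here):

lt() { cmp_versions "$1" '<' "$2"; }       # e.g. lt 1.15 2  ->  exit status 0 (true)

cmp_versions() {
  local ver1 ver1_l ver2 ver2_l
  local op=$2 lt=0 gt=0 v
  IFS=.-: read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}
  IFS=.-: read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}
  # Walk the longer of the two field lists; a missing field compares as 0.
  for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
    if ((${ver1[v]:-0} > ${ver2[v]:-0})); then gt=1; break; fi
    if ((${ver1[v]:-0} < ${ver2[v]:-0})); then lt=1; break; fi
  done
  case "$op" in
    '<') ((lt == 1)) ;;
    '>') ((gt == 1)) ;;
    *)   ((lt == 0 && gt == 0)) ;;   # '==' and friends collapse to equality here
  esac
}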
00:12:52.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.513 --rc genhtml_branch_coverage=1 00:12:52.513 --rc genhtml_function_coverage=1 00:12:52.513 --rc genhtml_legend=1 00:12:52.513 --rc geninfo_all_blocks=1 00:12:52.513 --rc geninfo_unexecuted_blocks=1 00:12:52.513 00:12:52.513 ' 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.513 --rc genhtml_branch_coverage=1 00:12:52.513 --rc genhtml_function_coverage=1 00:12:52.513 --rc genhtml_legend=1 00:12:52.513 --rc geninfo_all_blocks=1 00:12:52.513 --rc geninfo_unexecuted_blocks=1 00:12:52.513 00:12:52.513 ' 00:12:52.513 06:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.514 --rc genhtml_branch_coverage=1 00:12:52.514 --rc genhtml_function_coverage=1 00:12:52.514 --rc genhtml_legend=1 00:12:52.514 --rc geninfo_all_blocks=1 00:12:52.514 --rc geninfo_unexecuted_blocks=1 00:12:52.514 00:12:52.514 ' 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.514 --rc genhtml_branch_coverage=1 00:12:52.514 --rc genhtml_function_coverage=1 00:12:52.514 --rc genhtml_legend=1 00:12:52.514 --rc geninfo_all_blocks=1 00:12:52.514 --rc geninfo_unexecuted_blocks=1 00:12:52.514 00:12:52.514 ' 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.514 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:59.087 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:59.088 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:59.088 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:59.088 Found net devices under 0000:86:00.0: cvl_0_0 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:59.088 Found net devices under 0000:86:00.1: cvl_0_1 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:59.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:59.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms
00:12:59.088
00:12:59.088 --- 10.0.0.2 ping statistics ---
00:12:59.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:59.088 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:59.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:59.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:12:59.088
00:12:59.088 --- 10.0.0.1 ping statistics ---
00:12:59.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:59.088 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:59.088 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:59.088 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:12:59.088 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:59.088 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:59.088 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=414934
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 414934
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 414934 ']'
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.089 [2024-11-20 06:23:30.073133] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:12:59.089 [2024-11-20 06:23:30.073191] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:59.089 [2024-11-20 06:23:30.155330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:59.089 [2024-11-20 06:23:30.194559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:59.089 [2024-11-20 06:23:30.194593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:59.089 [2024-11-20 06:23:30.194601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:59.089 [2024-11-20 06:23:30.194607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:59.089 [2024-11-20 06:23:30.194612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:59.089 [2024-11-20 06:23:30.195196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:59.089 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.347 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:59.347 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:59.347 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.347 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.347 [2024-11-20 06:23:30.956074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:59.347 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.347 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:12:59.347 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.347 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.347 Malloc0
00:12:59.348 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.348 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:59.348 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.348 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.348 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.348 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:59.348 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.348 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.348 [2024-11-20 06:23:31.006371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=415180
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 415180 /var/tmp/bdevperf.sock
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 415180 ']'
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:59.348 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.348 [2024-11-20 06:23:31.059127] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:12:59.348 [2024-11-20 06:23:31.059168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415180 ]
00:12:59.348 [2024-11-20 06:23:31.133736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:59.348 [2024-11-20 06:23:31.174141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:59.606 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:59.606 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0
00:12:59.606 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:12:59.606 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.606 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:59.865 NVMe0n1
00:12:59.865 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.865 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:59.865 Running I/O for 10 seconds...
00:13:01.736 12157.00 IOPS, 47.49 MiB/s [2024-11-20T05:23:34.950Z] 12287.50 IOPS, 48.00 MiB/s [2024-11-20T05:23:35.887Z] 12288.00 IOPS, 48.00 MiB/s [2024-11-20T05:23:36.824Z] 12288.50 IOPS, 48.00 MiB/s [2024-11-20T05:23:37.760Z] 12396.60 IOPS, 48.42 MiB/s [2024-11-20T05:23:38.696Z] 12451.00 IOPS, 48.64 MiB/s [2024-11-20T05:23:39.633Z] 12433.29 IOPS, 48.57 MiB/s [2024-11-20T05:23:41.011Z] 12451.38 IOPS, 48.64 MiB/s [2024-11-20T05:23:41.947Z] 12505.44 IOPS, 48.85 MiB/s [2024-11-20T05:23:41.947Z] 12511.70 IOPS, 48.87 MiB/s
00:13:10.111 Latency(us)
00:13:10.111 [2024-11-20T05:23:41.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:10.111 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:13:10.111 Verification LBA range: start 0x0 length 0x4000
00:13:10.111 NVMe0n1 : 10.05 12550.34 49.02 0.00 0.00 81302.43 10860.25 51180.50
00:13:10.111 [2024-11-20T05:23:41.947Z] ===================================================================================================================
00:13:10.111 [2024-11-20T05:23:41.947Z] Total : 12550.34 49.02 0.00 0.00 81302.43 10860.25 51180.50
00:13:10.111 {
00:13:10.111 "results": [
00:13:10.112 {
00:13:10.112 "job": "NVMe0n1",
00:13:10.112 "core_mask": "0x1",
00:13:10.112 "workload": "verify",
00:13:10.112 "status": "finished",
00:13:10.112 "verify_range": {
00:13:10.112 "start": 0,
00:13:10.112 "length": 16384
00:13:10.112 },
00:13:10.112 "queue_depth": 1024,
00:13:10.112 "io_size": 4096,
00:13:10.112 "runtime": 10.0508,
00:13:10.112 "iops": 12550.344251203884,
00:13:10.112 "mibps": 49.02478223126517,
00:13:10.112 "io_failed": 0,
00:13:10.112 "io_timeout": 0,
00:13:10.112 "avg_latency_us": 81302.42605580074,
00:13:10.112 "min_latency_us": 10860.251428571428,
00:13:10.112 "max_latency_us": 51180.49523809524
00:13:10.112 }
00:13:10.112 ],
00:13:10.112 "core_count": 1
00:13:10.112 }
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 415180
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 415180 ']'
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 415180
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 415180
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 415180'
killing process with pid 415180
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 415180
00:13:10.112 Received shutdown signal, test time was about 10.000000 seconds
00:13:10.112
00:13:10.112 Latency(us)
00:13:10.112 [2024-11-20T05:23:41.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:10.112 [2024-11-20T05:23:41.948Z] ===================================================================================================================
00:13:10.112 [2024-11-20T05:23:41.948Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 415180
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:10.112 rmmod nvme_tcp
00:13:10.112 rmmod nvme_fabrics
00:13:10.112 rmmod nvme_keyring
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 414934 ']'
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 414934
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 414934 ']'
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 414934
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:10.112 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 414934
00:13:10.371 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:13:10.371 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:13:10.371 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 414934'
killing process with pid 414934
00:13:10.371 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 414934
00:13:10.371 06:23:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 414934
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:10.371 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:12.908
00:13:12.908 real 0m20.410s
00:13:12.908 user 0m23.957s
00:13:12.908 sys 0m6.083s
00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:13:12.908 ************************************
00:13:12.908 END TEST nvmf_queue_depth
00:13:12.908 ************************************
00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core --
common/autotest_common.sh@10 -- # set +x 00:13:12.908 ************************************ 00:13:12.908 START TEST nvmf_target_multipath 00:13:12.908 ************************************ 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:12.908 * Looking for test storage... 00:13:12.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.908 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:12.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.909 --rc genhtml_branch_coverage=1 00:13:12.909 --rc genhtml_function_coverage=1 00:13:12.909 --rc genhtml_legend=1 00:13:12.909 --rc geninfo_all_blocks=1 00:13:12.909 --rc geninfo_unexecuted_blocks=1 00:13:12.909 00:13:12.909 ' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:12.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.909 --rc genhtml_branch_coverage=1 00:13:12.909 --rc genhtml_function_coverage=1 00:13:12.909 --rc genhtml_legend=1 00:13:12.909 --rc geninfo_all_blocks=1 00:13:12.909 --rc geninfo_unexecuted_blocks=1 00:13:12.909 00:13:12.909 ' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:12.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.909 --rc genhtml_branch_coverage=1 00:13:12.909 --rc genhtml_function_coverage=1 00:13:12.909 --rc genhtml_legend=1 00:13:12.909 --rc geninfo_all_blocks=1 00:13:12.909 --rc geninfo_unexecuted_blocks=1 00:13:12.909 00:13:12.909 ' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:12.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.909 --rc genhtml_branch_coverage=1 00:13:12.909 --rc genhtml_function_coverage=1 00:13:12.909 --rc genhtml_legend=1 00:13:12.909 --rc geninfo_all_blocks=1 00:13:12.909 --rc geninfo_unexecuted_blocks=1 00:13:12.909 00:13:12.909 ' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:13:12.909 06:23:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:19.484 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:19.484 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:19.484 Found net devices under 0000:86:00.0: cvl_0_0 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.484 06:23:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:19.484 Found net devices under 0000:86:00.1: cvl_0_1 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.484 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:19.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:19.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms
00:13:19.485
00:13:19.485 --- 10.0.0.2 ping statistics ---
00:13:19.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:19.485 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:19.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:19.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms
00:13:19.485
00:13:19.485 --- 10.0.0.1 ping statistics ---
00:13:19.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:19.485 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:13:19.485 only one NIC for nvmf test
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
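[Note] Every nvmf test in this section runs over the same single-host topology that nvmf_tcp_init in nvmf/common.sh just rebuilt above: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and hosts the SPDK target, while its peer port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace, as a sketch only (interface names, addresses, and the 4420 listener port are specific to this run):

  ip netns add cvl_0_0_ns_spdk                                 # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                           # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Two observations that can be checked against the log itself: the nvmf_queue_depth result earlier is internally consistent (12550.34 IOPS * 4096 B per I/O / 2^20 = 49.02 MiB/s, matching the "mibps" field in the JSON summary), and nvmf_target_multipath is exiting here without issuing any I/O because the '[' -z ']' test at multipath.sh line 45 saw an empty second-target address; with only one usable NIC pair on this rig there is no second path to exercise.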
00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.485 rmmod nvme_tcp 00:13:19.485 rmmod nvme_fabrics 00:13:19.485 rmmod nvme_keyring 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.485 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.864 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.123 00:13:21.123 real 0m8.403s 00:13:21.123 user 0m1.838s 00:13:21.123 sys 0m4.558s 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:21.123 ************************************ 00:13:21.123 END TEST nvmf_target_multipath 00:13:21.123 ************************************ 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:21.123 ************************************ 00:13:21.123 START TEST nvmf_zcopy 00:13:21.123 ************************************ 00:13:21.123 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:21.123 * Looking for test storage... 
00:13:21.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:21.124 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:21.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.383 --rc genhtml_branch_coverage=1 00:13:21.383 --rc genhtml_function_coverage=1 00:13:21.383 --rc genhtml_legend=1 00:13:21.383 --rc geninfo_all_blocks=1 00:13:21.383 --rc geninfo_unexecuted_blocks=1 00:13:21.383 00:13:21.383 ' 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:21.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.383 --rc genhtml_branch_coverage=1 00:13:21.383 --rc genhtml_function_coverage=1 00:13:21.383 --rc genhtml_legend=1 00:13:21.383 --rc geninfo_all_blocks=1 00:13:21.383 --rc geninfo_unexecuted_blocks=1 00:13:21.383 00:13:21.383 ' 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:21.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.383 --rc genhtml_branch_coverage=1 00:13:21.383 --rc genhtml_function_coverage=1 00:13:21.383 --rc genhtml_legend=1 00:13:21.383 --rc geninfo_all_blocks=1 00:13:21.383 --rc geninfo_unexecuted_blocks=1 00:13:21.383 00:13:21.383 ' 00:13:21.383 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:21.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.384 --rc genhtml_branch_coverage=1 00:13:21.384 --rc genhtml_function_coverage=1 00:13:21.384 --rc genhtml_legend=1 00:13:21.384 --rc geninfo_all_blocks=1 00:13:21.384 --rc geninfo_unexecuted_blocks=1 00:13:21.384 00:13:21.384 ' 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.384 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.384 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.384 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.384 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.384 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.955 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:27.956 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:27.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:27.956 Found net devices under 0000:86:00.0: cvl_0_0 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:27.956 Found net devices under 0000:86:00.1: cvl_0_1 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:13:27.956 00:13:27.956 --- 10.0.0.2 ping statistics --- 00:13:27.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.956 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:13:27.956 00:13:27.956 --- 10.0.0.1 ping statistics --- 00:13:27.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.956 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.956 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=424095 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 424095 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 424095 ']' 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.957 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 [2024-11-20 06:23:59.038269] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
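
The nvmf_tcp_init sequence traced above builds a two-endpoint TCP topology on a single host: one e810 port (cvl_0_0) is moved into a private network namespace to serve as the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings verify reachability in both directions before any NVMe traffic flows. A condensed sketch of the same setup, assuming the interface names from this run:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP/4420 on the initiator interface
    ping -c 1 10.0.0.2                                                  # root namespace -> target, as logged above
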
00:13:27.957 [2024-11-20 06:23:59.038318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.957 [2024-11-20 06:23:59.115738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.957 [2024-11-20 06:23:59.156362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.957 [2024-11-20 06:23:59.156400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.957 [2024-11-20 06:23:59.156407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.957 [2024-11-20 06:23:59.156414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.957 [2024-11-20 06:23:59.156419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.957 [2024-11-20 06:23:59.156977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 [2024-11-20 06:23:59.303942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 [2024-11-20 06:23:59.328157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 malloc0 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:27.957 { 00:13:27.957 "params": { 00:13:27.957 "name": "Nvme$subsystem", 00:13:27.957 "trtype": "$TEST_TRANSPORT", 00:13:27.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:27.957 "adrfam": "ipv4", 00:13:27.957 "trsvcid": "$NVMF_PORT", 00:13:27.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:27.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:27.957 "hdgst": ${hdgst:-false}, 00:13:27.957 "ddgst": ${ddgst:-false} 00:13:27.957 }, 00:13:27.957 "method": "bdev_nvme_attach_controller" 00:13:27.957 } 00:13:27.957 EOF 00:13:27.957 )") 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:27.957 06:23:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:27.957 "params": { 00:13:27.957 "name": "Nvme1", 00:13:27.957 "trtype": "tcp", 00:13:27.957 "traddr": "10.0.0.2", 00:13:27.957 "adrfam": "ipv4", 00:13:27.957 "trsvcid": "4420", 00:13:27.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:27.957 "hdgst": false, 00:13:27.957 "ddgst": false 00:13:27.957 }, 00:13:27.957 "method": "bdev_nvme_attach_controller" 00:13:27.957 }' 00:13:27.957 [2024-11-20 06:23:59.410026] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:13:27.957 [2024-11-20 06:23:59.410067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424118 ] 00:13:27.957 [2024-11-20 06:23:59.486822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.957 [2024-11-20 06:23:59.527310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.957 Running I/O for 10 seconds... 00:13:30.056 8618.00 IOPS, 67.33 MiB/s [2024-11-20T05:24:02.826Z] 8739.00 IOPS, 68.27 MiB/s [2024-11-20T05:24:03.761Z] 8780.00 IOPS, 68.59 MiB/s [2024-11-20T05:24:05.136Z] 8800.00 IOPS, 68.75 MiB/s [2024-11-20T05:24:06.071Z] 8812.60 IOPS, 68.85 MiB/s [2024-11-20T05:24:07.006Z] 8828.83 IOPS, 68.98 MiB/s [2024-11-20T05:24:07.942Z] 8826.14 IOPS, 68.95 MiB/s [2024-11-20T05:24:08.876Z] 8831.50 IOPS, 69.00 MiB/s [2024-11-20T05:24:09.810Z] 8835.67 IOPS, 69.03 MiB/s [2024-11-20T05:24:09.810Z] 8838.90 IOPS, 69.05 MiB/s 00:13:37.974 Latency(us) 00:13:37.974 [2024-11-20T05:24:09.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.974 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:37.974 Verification LBA range: start 0x0 length 0x1000 00:13:37.974 Nvme1n1 : 10.05 8809.03 68.82 0.00 0.00 14432.93 1466.76 40694.74 00:13:37.974 [2024-11-20T05:24:09.810Z] =================================================================================================================== 00:13:37.974 [2024-11-20T05:24:09.810Z] Total : 8809.03 68.82 0.00 0.00 14432.93 1466.76 40694.74 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=426403 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:38.233 { 00:13:38.233 "params": { 00:13:38.233 "name": 
"Nvme$subsystem", 00:13:38.233 "trtype": "$TEST_TRANSPORT", 00:13:38.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.233 "adrfam": "ipv4", 00:13:38.233 "trsvcid": "$NVMF_PORT", 00:13:38.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.233 "hdgst": ${hdgst:-false}, 00:13:38.233 "ddgst": ${ddgst:-false} 00:13:38.233 }, 00:13:38.233 "method": "bdev_nvme_attach_controller" 00:13:38.233 } 00:13:38.233 EOF 00:13:38.233 )") 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:38.233 [2024-11-20 06:24:09.962873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:09.962913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:38.233 06:24:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:38.233 "params": { 00:13:38.233 "name": "Nvme1", 00:13:38.233 "trtype": "tcp", 00:13:38.233 "traddr": "10.0.0.2", 00:13:38.233 "adrfam": "ipv4", 00:13:38.233 "trsvcid": "4420", 00:13:38.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.233 "hdgst": false, 00:13:38.233 "ddgst": false 00:13:38.233 }, 00:13:38.233 "method": "bdev_nvme_attach_controller" 00:13:38.233 }' 00:13:38.233 [2024-11-20 06:24:09.974868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:09.974883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.233 [2024-11-20 06:24:09.985838] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:13:38.233 [2024-11-20 06:24:09.985882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426403 ] 00:13:38.233 [2024-11-20 06:24:09.986895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:09.986908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.233 [2024-11-20 06:24:09.998928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:09.998939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.233 [2024-11-20 06:24:10.010961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:10.010972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.233 [2024-11-20 06:24:10.022989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:10.023001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.233 [2024-11-20 06:24:10.035023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:10.035041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.233 [2024-11-20 06:24:10.045129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.233 [2024-11-20 06:24:10.047056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:10.047067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.233 [2024-11-20 06:24:10.059091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.233 [2024-11-20 06:24:10.059110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.071151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.071174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.083157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.083168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.088957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.492 [2024-11-20 06:24:10.095184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.095196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.107232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.107252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.119257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.119273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.131286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:13:38.492 [2024-11-20 06:24:10.131299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.143315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.143328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.155350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.155363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.167378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.167388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.179440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.179462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.191452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.191468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.203482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.203497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.215515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.215529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.227541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.227551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.239574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.239584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.251610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.251627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.263639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.263652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.275672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.275682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.287704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.287714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.299742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.299756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 
06:24:10.311771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.311781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.492 [2024-11-20 06:24:10.323814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.492 [2024-11-20 06:24:10.323831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.335843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.335859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.347874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.347886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.359915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.359934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 Running I/O for 5 seconds... 00:13:38.751 [2024-11-20 06:24:10.371946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.371964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.388598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.388634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.398133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.398152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.407127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.407148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.422146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.422168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.433118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.433139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.447913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.447932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.463557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.463576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.472873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.472893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.487115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:38.751 [2024-11-20 06:24:10.487134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.500672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.500692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.514426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.514445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.523363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.523382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.532186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.532211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.546527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.546546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.560060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.560079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.751 [2024-11-20 06:24:10.574171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.751 [2024-11-20 06:24:10.574192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.588709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.588730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.602633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.602653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.616498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.616517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.625453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.625472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.639446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.639468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.648442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.648462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.658303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.658322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.672855] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.672876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.681889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.681909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.696106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.696127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.704862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.704882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.718948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.718968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.727936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.727955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.741960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.741979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.755442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.755467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.769003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.769022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.777713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.777732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.787142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.787161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.801247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.801266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.814974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.814993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.828656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.828676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.010 [2024-11-20 06:24:10.842293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.010 [2024-11-20 06:24:10.842314] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:39.269 [2024-11-20 06:24:10.856344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:39.269 [2024-11-20 06:24:10.856364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair (subsystem.c:2123 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats for every add-namespace attempt from 06:24:10.865 through 06:24:11.364; only the timestamps change ...]
00:13:39.788 16919.00 IOPS, 132.18 MiB/s [2024-11-20T05:24:11.624Z]
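[editor's note, not part of the captured log: the repeating pair above is the SPDK target's expected complaint when an nvmf_subsystem_add_ns RPC asks for a namespace ID the subsystem already exposes; subsystem.c rejects the duplicate NSID and nvmf_rpc.c then reports the failed RPC. A minimal reproduction sketch, assuming a running target with its default RPC socket and an already-created subsystem; the NQN, bdev names, sizes, and the --nsid flag are illustrative assumptions based on scripts/rpc.py, not values taken from this log:

  # two backing bdevs, 64 MB each with 512-byte blocks (hypothetical names)
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  # the first add claims NSID 1 and succeeds
  ./scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  # the second add requests the same NSID and fails, emitting exactly the
  # "Requested NSID 1 already in use" / "Unable to add namespace" pair
  ./scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 nqn.2016-06.io.spdk:cnode1 Malloc1

The test driving this log evidently retries such an add in a loop while I/O runs in the background, which is why the pair recurs below with only the timestamps advancing.]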
[... the error pair repeats for every attempt from 06:24:11.372 through 06:24:12.365 ...]
00:13:40.565 17026.00 IOPS, 133.02 MiB/s [2024-11-20T05:24:12.401Z]
[... the error pair repeats from 06:24:12.376 through 06:24:13.373 ...]
00:13:41.600 17055.00 IOPS, 133.24 MiB/s [2024-11-20T05:24:13.436Z]
[... the error pair repeats from 06:24:13.386 through 06:24:14.365 ...]
00:13:42.638 17075.25 IOPS, 133.40 MiB/s [2024-11-20T05:24:14.474Z]
[... the error pair repeats from 06:24:14.379 through 06:24:14.613 ...]
00:13:42.897 [2024-11-20 06:24:14.627023]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.627042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.897 [2024-11-20 06:24:14.640183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.640208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.897 [2024-11-20 06:24:14.649435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.649454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.897 [2024-11-20 06:24:14.664296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.664315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.897 [2024-11-20 06:24:14.677853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.677873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.897 [2024-11-20 06:24:14.686885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.686911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.897 [2024-11-20 06:24:14.701067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.701087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.897 [2024-11-20 06:24:14.714764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.714784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.897 [2024-11-20 06:24:14.728318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.897 [2024-11-20 06:24:14.728339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.742385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.742405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.756449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.756468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.770011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.770031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.783652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.783672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.797612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.797633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.806460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.806480] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.820524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.820543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.834901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.834921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.845945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.845964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.860480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.860499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.874361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.874381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.883003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.883022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.897481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.897501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.906398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.906417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.920685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.920703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.929454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.929472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.943671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.943692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.957211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.957246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.971063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.971083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.156 [2024-11-20 06:24:14.984343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.156 [2024-11-20 06:24:14.984362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:14.998423] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:14.998444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.012107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.012127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.025729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.025749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.039354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.039373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.053001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.053021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.061900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.061920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.071254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.071273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.085117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.085136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.098419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.098437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.112171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.112191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.125629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.125648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.139423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.139442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.153068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.153088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.166519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.166537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.180206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.180227] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.194113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.194133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.207669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.207688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.221408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.221428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.414 [2024-11-20 06:24:15.234836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.414 [2024-11-20 06:24:15.234858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.249047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.249068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.260617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.260638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.274887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.274910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.283894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.283915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.293259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.293279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.307668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.307688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.321041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.321061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.330722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.330742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.344471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.344491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.353174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.353194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.367622] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.367642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 17082.40 IOPS, 133.46 MiB/s [2024-11-20T05:24:15.509Z] [2024-11-20 06:24:15.381488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.381509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 00:13:43.673 Latency(us) 00:13:43.673 [2024-11-20T05:24:15.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.673 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:43.673 Nvme1n1 : 5.01 17086.29 133.49 0.00 0.00 7484.59 3308.01 16976.94 00:13:43.673 [2024-11-20T05:24:15.509Z] =================================================================================================================== 00:13:43.673 [2024-11-20T05:24:15.509Z] Total : 17086.29 133.49 0.00 0.00 7484.59 3308.01 16976.94 00:13:43.673 [2024-11-20 06:24:15.391526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.391545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.403552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.403569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.415603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.415624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.427623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.427641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.439660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.439677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.451688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.451706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.463719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.463737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.475749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.475766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.673 [2024-11-20 06:24:15.487779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.673 [2024-11-20 06:24:15.487795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.674 [2024-11-20 06:24:15.499812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.674 [2024-11-20 06:24:15.499824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.932 [2024-11-20 
06:24:15.511857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.932 [2024-11-20 06:24:15.511877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.932 [2024-11-20 06:24:15.523882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.932 [2024-11-20 06:24:15.523896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.932 [2024-11-20 06:24:15.535909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.932 [2024-11-20 06:24:15.535921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.932 [2024-11-20 06:24:15.547940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.932 [2024-11-20 06:24:15.547951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (426403) - No such process 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 426403 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:43.932 delay0 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.932 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:43.932 [2024-11-20 06:24:15.742339] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:50.492 Initializing NVMe Controllers 00:13:50.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:50.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:50.492 Initialization complete. Launching workers. 
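The flood of "NSID 1 already in use" errors condensed above comes from the zcopy test deliberately re-adding an already-attached namespace over JSON-RPC while I/O runs, after which it swaps in a delay bdev and drives the abort example whose banner appears just above (its per-namespace results follow below). The sequence can be approximated with plain rpc.py calls; rpc_cmd in the trace is a thin wrapper over rpc.py. This is a minimal sketch, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a malloc0 bdev attached; the loop shape and count are illustrative, not zcopy.sh's exact control flow:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Re-adding NSID 1 while it is attached fails each time with
  # "Requested NSID 1 already in use" -- the error pairs logged above.
  for _ in $(seq 1 100); do   # loop count illustrative
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done

  # Swap the namespace for a delay bdev wrapped around malloc0 ...
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # ... then run the abort example against it, exactly as logged.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'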
00:13:50.492 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:13:50.492 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 33 00:13:50.492 success 204, unsuccessful 192, failed 0 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:50.492 rmmod nvme_tcp 00:13:50.492 rmmod nvme_fabrics 00:13:50.492 rmmod nvme_keyring 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:50.492 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 424095 ']' 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 424095 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 424095 ']' 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 424095 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 424095 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 424095' 00:13:50.493 killing process with pid 424095 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 424095 00:13:50.493 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 424095 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.493 06:24:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.493 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.398 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.398 00:13:52.398 real 0m31.396s 00:13:52.398 user 0m41.937s 00:13:52.398 sys 0m11.060s 00:13:52.398 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.398 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.398 ************************************ 00:13:52.398 END TEST nvmf_zcopy 00:13:52.398 ************************************ 00:13:52.398 06:24:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:52.398 06:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:52.398 06:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.398 06:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:52.657 ************************************ 00:13:52.657 START TEST nvmf_nmic 00:13:52.657 ************************************ 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:52.657 * Looking for test storage... 
00:13:52.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.657 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:52.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.658 --rc genhtml_branch_coverage=1 00:13:52.658 --rc genhtml_function_coverage=1 00:13:52.658 --rc genhtml_legend=1 00:13:52.658 --rc geninfo_all_blocks=1 00:13:52.658 --rc geninfo_unexecuted_blocks=1 00:13:52.658 00:13:52.658 ' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:52.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.658 --rc genhtml_branch_coverage=1 00:13:52.658 --rc genhtml_function_coverage=1 00:13:52.658 --rc genhtml_legend=1 00:13:52.658 --rc geninfo_all_blocks=1 00:13:52.658 --rc geninfo_unexecuted_blocks=1 00:13:52.658 00:13:52.658 ' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:52.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.658 --rc genhtml_branch_coverage=1 00:13:52.658 --rc genhtml_function_coverage=1 00:13:52.658 --rc genhtml_legend=1 00:13:52.658 --rc geninfo_all_blocks=1 00:13:52.658 --rc geninfo_unexecuted_blocks=1 00:13:52.658 00:13:52.658 ' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:52.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.658 --rc genhtml_branch_coverage=1 00:13:52.658 --rc genhtml_function_coverage=1 00:13:52.658 --rc genhtml_legend=1 00:13:52.658 --rc geninfo_all_blocks=1 00:13:52.658 --rc geninfo_unexecuted_blocks=1 00:13:52.658 00:13:52.658 ' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
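The cmp_versions xtrace above is scripts/common.sh checking whether the detected lcov (1.15 here) predates version 2 before picking coverage flags. A minimal standalone sketch of that dotted-version comparison, simplified from the trace (helper names mirror scripts/common.sh; the demo call at the end is illustrative):

  # Compare two dotted versions field by field, as the trace walks through.
  lt() { cmp_versions "$1" "<" "$2"; }   # "less than" helper, as in scripts/common.sh@373

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $op == ">" ]] && return 0
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $op == "<" ]] && return 0
          ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1   # unequal but wrong direction
      done
      # All compared fields equal: strict comparisons are false.
      [[ $op == "<" || $op == ">" ]] && return 1 || return 0
  }

  lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the decision in the trace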
00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:52.658 
06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.658 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:59.228 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:59.229 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:59.229 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.229 06:24:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:59.229 Found net devices under 0000:86:00.0: cvl_0_0 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:59.229 Found net devices under 0000:86:00.1: cvl_0_1 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:59.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:13:59.229 00:13:59.229 --- 10.0.0.2 ping statistics --- 00:13:59.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.229 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:13:59.229 00:13:59.229 --- 10.0.0.1 ping statistics --- 00:13:59.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.229 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.229 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=431855 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 431855 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 431855 ']' 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 [2024-11-20 06:24:30.527351] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:13:59.230 [2024-11-20 06:24:30.527398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.230 [2024-11-20 06:24:30.605833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.230 [2024-11-20 06:24:30.649159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.230 [2024-11-20 06:24:30.649198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.230 [2024-11-20 06:24:30.649208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.230 [2024-11-20 06:24:30.649215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.230 [2024-11-20 06:24:30.649220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.230 [2024-11-20 06:24:30.650638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.230 [2024-11-20 06:24:30.650748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.230 [2024-11-20 06:24:30.650875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.230 [2024-11-20 06:24:30.650876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 [2024-11-20 06:24:30.787894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 Malloc0 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 [2024-11-20 06:24:30.849736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:59.230 test case1: single bdev can't be used in multiple subsystems 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 [2024-11-20 06:24:30.877634] bdev.c:8189:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:59.230 [2024-11-20 06:24:30.877656] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:59.230 [2024-11-20 06:24:30.877663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.230 request: 00:13:59.230 { 00:13:59.230 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:59.230 "namespace": { 00:13:59.230 "bdev_name": "Malloc0", 00:13:59.230 "no_auto_visible": false 
00:13:59.230 }, 00:13:59.230 "method": "nvmf_subsystem_add_ns", 00:13:59.230 "req_id": 1 00:13:59.230 } 00:13:59.230 Got JSON-RPC error response 00:13:59.230 response: 00:13:59.230 { 00:13:59.230 "code": -32602, 00:13:59.230 "message": "Invalid parameters" 00:13:59.230 } 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:59.230 Adding namespace failed - expected result. 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:59.230 test case2: host connect to nvmf target in multiple paths 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:59.230 [2024-11-20 06:24:30.889785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.230 06:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.165 06:24:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:01.539 06:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:01.539 06:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:14:01.539 06:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.539 06:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:01.539 06:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:14:03.433 06:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:03.433 06:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:03.433 06:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.433 06:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:03.433 06:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.433 06:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:14:03.433 06:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:03.433 [global] 00:14:03.433 thread=1 00:14:03.433 invalidate=1 00:14:03.433 rw=write 00:14:03.433 time_based=1 00:14:03.433 runtime=1 00:14:03.433 ioengine=libaio 00:14:03.433 direct=1 00:14:03.433 bs=4096 00:14:03.433 iodepth=1 00:14:03.433 norandommap=0 00:14:03.433 numjobs=1 00:14:03.433 00:14:03.433 verify_dump=1 00:14:03.433 verify_backlog=512 00:14:03.433 verify_state_save=0 00:14:03.433 do_verify=1 00:14:03.433 verify=crc32c-intel 00:14:03.433 [job0] 00:14:03.433 filename=/dev/nvme0n1 00:14:03.433 Could not set queue depth (nvme0n1) 00:14:03.690 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.690 fio-3.35 00:14:03.690 Starting 1 thread 00:14:05.064 00:14:05.064 job0: (groupid=0, jobs=1): err= 0: pid=432931: Wed Nov 20 06:24:36 2024 00:14:05.064 read: IOPS=2543, BW=9.93MiB/s (10.4MB/s)(9.95MiB/1001msec) 00:14:05.064 slat (nsec): min=6993, max=22425, avg=8029.04, stdev=1245.13 00:14:05.064 clat (usec): min=171, max=294, avg=218.48, stdev=20.17 00:14:05.064 lat (usec): min=178, max=301, avg=226.51, stdev=20.35 00:14:05.064 clat percentiles (usec): 00:14:05.064 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:14:05.064 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 219], 00:14:05.064 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 253], 00:14:05.064 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 293], 00:14:05.064 | 99.99th=[ 293] 00:14:05.064 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:05.064 slat (nsec): min=10074, max=45682, avg=11233.56, stdev=1952.35 00:14:05.064 clat (usec): min=112, max=311, avg=148.13, stdev=14.56 00:14:05.064 lat (usec): min=122, max=356, avg=159.36, stdev=14.86 00:14:05.064 clat percentiles (usec): 00:14:05.064 | 1.00th=[ 119], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 133], 00:14:05.064 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:14:05.064 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 163], 95.00th=[ 167], 00:14:05.064 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 182], 99.95th=[ 184], 00:14:05.064 | 99.99th=[ 310] 00:14:05.064 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:14:05.064 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:05.064 lat (usec) : 250=97.00%, 500=3.00% 00:14:05.064 cpu : usr=4.10%, sys=7.90%, ctx=5106, majf=0, minf=1 00:14:05.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.064 issued rwts: total=2546,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.064 00:14:05.064 Run status group 0 (all jobs): 00:14:05.064 READ: bw=9.93MiB/s (10.4MB/s), 9.93MiB/s-9.93MiB/s (10.4MB/s-10.4MB/s), io=9.95MiB (10.4MB), run=1001-1001msec 00:14:05.064 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:14:05.064 00:14:05.064 Disk stats (read/write): 00:14:05.064 nvme0n1: ios=2200/2560, merge=0/0, ticks=451/354, in_queue=805, util=91.18% 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:14:05.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:05.064 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:05.064 rmmod nvme_tcp 00:14:05.064 rmmod nvme_fabrics 00:14:05.323 rmmod nvme_keyring 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 431855 ']' 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 431855 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 431855 ']' 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 431855 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:14:05.323 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:05.324 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 431855 00:14:05.324 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:05.324 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:05.324 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 431855' 00:14:05.324 killing process with pid 431855 00:14:05.324 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 431855 00:14:05.324 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 431855 
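Condensed from the trace above, the nmic control path can be replayed by hand against a running nvmf_tgt with scripts/rpc.py (rpc_cmd in the test scripts is a thin wrapper around it). A sketch, assuming the target is already up and listening on the default /var/tmp/spdk.sock, and that NVME_HOSTNQN/NVME_HOSTID are set as in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # case1: Malloc0 is claimed exclusive_write by cnode1, so attaching it
  # to a second subsystem must fail with Invalid parameters (-32602)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'unexpected: duplicate bdev claim succeeded' >&2
    exit 1
  fi

  # case2: two listeners, two paths from the same host to cnode1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

  # waitforserial: block until the namespace surfaces as a block device
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both controllers at once

fio-wrapper then drives the resulting /dev/nvme0n1 with the single-job libaio write workload shown in the log before the disconnect.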
00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.583 06:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.496 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:07.496 00:14:07.496 real 0m14.978s 00:14:07.496 user 0m32.840s 00:14:07.496 sys 0m5.324s 00:14:07.496 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.496 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 ************************************ 00:14:07.496 END TEST nvmf_nmic 00:14:07.496 ************************************ 00:14:07.496 06:24:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:07.496 06:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:07.496 06:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.496 06:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 ************************************ 00:14:07.496 START TEST nvmf_fio_target 00:14:07.496 ************************************ 00:14:07.496 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:07.756 * Looking for test storage... 
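The nvmftestfini teardown just traced has three visible effects beyond stopping the app: the tagged iptables rule is removed, the target namespace is torn down, and the initiator interface is de-addressed. A stand-alone equivalent in the order the trace ran it (remove_spdk_ns itself executes with xtrace off, so deleting the namespace directly is an assumption about its effect, not a copy of its body):

  sync
  modprobe -v -r nvme-tcp          # also unloads the nvme_fabrics/nvme_keyring deps
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                  # 431855 in this run
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # what the iptr helper does
  ip netns delete cvl_0_0_ns_spdk  # returns cvl_0_0 to the root namespace
  ip -4 addr flush cvl_0_1

Filtering the saved ruleset on the SPDK_NVMF comment is what makes the cleanup safe to run on a shared CI host: only the rule the test inserted disappears.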
00:14:07.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:07.756 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:07.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.757 --rc genhtml_branch_coverage=1 00:14:07.757 --rc genhtml_function_coverage=1 00:14:07.757 --rc genhtml_legend=1 00:14:07.757 --rc geninfo_all_blocks=1 00:14:07.757 --rc geninfo_unexecuted_blocks=1 00:14:07.757 00:14:07.757 ' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:07.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.757 --rc genhtml_branch_coverage=1 00:14:07.757 --rc genhtml_function_coverage=1 00:14:07.757 --rc genhtml_legend=1 00:14:07.757 --rc geninfo_all_blocks=1 00:14:07.757 --rc geninfo_unexecuted_blocks=1 00:14:07.757 00:14:07.757 ' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:07.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.757 --rc genhtml_branch_coverage=1 00:14:07.757 --rc genhtml_function_coverage=1 00:14:07.757 --rc genhtml_legend=1 00:14:07.757 --rc geninfo_all_blocks=1 00:14:07.757 --rc geninfo_unexecuted_blocks=1 00:14:07.757 00:14:07.757 ' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:07.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.757 --rc genhtml_branch_coverage=1 00:14:07.757 --rc genhtml_function_coverage=1 00:14:07.757 --rc genhtml_legend=1 00:14:07.757 --rc geninfo_all_blocks=1 00:14:07.757 --rc geninfo_unexecuted_blocks=1 00:14:07.757 00:14:07.757 ' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.757 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.758 06:24:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:07.758 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.329 06:24:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:14.329 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:14.329 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.329 06:24:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:14.329 Found net devices under 0000:86:00.0: cvl_0_0 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:14.329 Found net devices under 0000:86:00.1: cvl_0_1 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.329 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.330 06:24:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:14:14.330 00:14:14.330 --- 10.0.0.2 ping statistics --- 00:14:14.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.330 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:14.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:14:14.330 00:14:14.330 --- 10.0.0.1 ping statistics --- 00:14:14.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.330 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=436692 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 436692 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 436692 ']' 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.330 [2024-11-20 06:24:45.598878] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
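The device scan traced above (gather_supported_nvmf_pci_devs, nvmf/common.sh@313-429) is, stripped of its bus caching and link-state checks, a sysfs walk: keep Intel PCI functions with an E810 device ID and read the netdev names bound under each. A simplified sketch of that mapping (the real helper also verifies the link is up, elided here):

  for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 ]] || continue
    [[ $device == 0x159b || $device == 0x1592 ]] || continue   # E810 IDs matched in this run
    for net in "$pci"/net/*; do
      [[ -e $net ]] || continue              # function has no bound netdev
      echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done

On this box the walk yields the two cvl_0_x ports under 0000:86:00.0 and 0000:86:00.1, which then become NVMF_TARGET_INTERFACE and NVMF_INITIATOR_INTERFACE for the namespace setup repeated above.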
00:14:14.330 [2024-11-20 06:24:45.598920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.330 [2024-11-20 06:24:45.676635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.330 [2024-11-20 06:24:45.718805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.330 [2024-11-20 06:24:45.718840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.330 [2024-11-20 06:24:45.718847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.330 [2024-11-20 06:24:45.718853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.330 [2024-11-20 06:24:45.718858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.330 [2024-11-20 06:24:45.720351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.330 [2024-11-20 06:24:45.720457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.330 [2024-11-20 06:24:45.720567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.330 [2024-11-20 06:24:45.720568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.330 06:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:14.330 [2024-11-20 06:24:46.013807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.330 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:14.588 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:14.588 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:14.846 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:14.846 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:15.104 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:15.104 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:15.104 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:15.104 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:15.362 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:15.620 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:15.620 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:15.878 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:15.878 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:16.135 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:16.135 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:16.135 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:16.393 06:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:16.393 06:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:16.651 06:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:16.651 06:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:16.909 06:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.909 [2024-11-20 06:24:48.707917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.909 06:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:17.166 06:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:17.424 06:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.798 06:24:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:18.798 06:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:14:18.798 06:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.798 06:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:14:18.798 06:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:14:18.798 06:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:14:20.696 06:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:20.696 06:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:20.696 06:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.696 06:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:14:20.696 06:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.696 06:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:14:20.696 06:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:20.696 [global] 00:14:20.696 thread=1 00:14:20.696 invalidate=1 00:14:20.696 rw=write 00:14:20.696 time_based=1 00:14:20.696 runtime=1 00:14:20.696 ioengine=libaio 00:14:20.696 direct=1 00:14:20.696 bs=4096 00:14:20.696 iodepth=1 00:14:20.696 norandommap=0 00:14:20.696 numjobs=1 00:14:20.696 00:14:20.696 verify_dump=1 00:14:20.696 verify_backlog=512 00:14:20.696 verify_state_save=0 00:14:20.696 do_verify=1 00:14:20.696 verify=crc32c-intel 00:14:20.696 [job0] 00:14:20.696 filename=/dev/nvme0n1 00:14:20.696 [job1] 00:14:20.696 filename=/dev/nvme0n2 00:14:20.696 [job2] 00:14:20.696 filename=/dev/nvme0n3 00:14:20.696 [job3] 00:14:20.696 filename=/dev/nvme0n4 00:14:20.696 Could not set queue depth (nvme0n1) 00:14:20.696 Could not set queue depth (nvme0n2) 00:14:20.697 Could not set queue depth (nvme0n3) 00:14:20.697 Could not set queue depth (nvme0n4) 00:14:20.954 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:20.954 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:20.954 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:20.954 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:20.954 fio-3.35 00:14:20.954 Starting 4 threads 00:14:22.328 00:14:22.328 job0: (groupid=0, jobs=1): err= 0: pid=438043: Wed Nov 20 06:24:53 2024 00:14:22.328 read: IOPS=2536, BW=9.91MiB/s (10.4MB/s)(9.92MiB/1001msec) 00:14:22.328 slat (nsec): min=6953, max=39745, avg=8047.96, stdev=1210.77 00:14:22.328 clat (usec): min=158, max=1287, avg=215.93, stdev=37.58 00:14:22.328 lat (usec): min=166, max=1295, avg=223.98, stdev=37.65 00:14:22.328 clat percentiles (usec): 00:14:22.328 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 184], 
00:14:22.328 | 30.00th=[ 192], 40.00th=[ 202], 50.00th=[ 215], 60.00th=[ 231], 00:14:22.328 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 260], 00:14:22.328 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 334], 99.95th=[ 502], 00:14:22.328 | 99.99th=[ 1287] 00:14:22.328 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:22.328 slat (nsec): min=10206, max=42289, avg=11509.34, stdev=1645.65 00:14:22.328 clat (usec): min=111, max=361, avg=150.98, stdev=25.82 00:14:22.328 lat (usec): min=123, max=372, avg=162.49, stdev=26.11 00:14:22.328 clat percentiles (usec): 00:14:22.328 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:14:22.328 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 147], 60.00th=[ 153], 00:14:22.328 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 196], 00:14:22.328 | 99.00th=[ 233], 99.50th=[ 251], 99.90th=[ 269], 99.95th=[ 297], 00:14:22.328 | 99.99th=[ 363] 00:14:22.328 bw ( KiB/s): min=12288, max=12288, per=62.10%, avg=12288.00, stdev= 0.00, samples=1 00:14:22.328 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:22.328 lat (usec) : 250=91.72%, 500=8.24%, 750=0.02% 00:14:22.328 lat (msec) : 2=0.02% 00:14:22.328 cpu : usr=5.30%, sys=6.90%, ctx=5099, majf=0, minf=1 00:14:22.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.328 issued rwts: total=2539,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.328 job1: (groupid=0, jobs=1): err= 0: pid=438044: Wed Nov 20 06:24:53 2024 00:14:22.328 read: IOPS=23, BW=92.8KiB/s (95.0kB/s)(96.0KiB/1035msec) 00:14:22.328 slat (nsec): min=10918, max=23824, avg=17016.17, stdev=3010.89 00:14:22.328 clat (usec): min=443, max=42067, avg=39349.91, stdev=8293.05 00:14:22.328 lat (usec): min=459, max=42089, avg=39366.93, stdev=8293.12 00:14:22.328 clat percentiles (usec): 00:14:22.328 | 1.00th=[ 445], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:22.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:22.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:14:22.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:22.328 | 99.99th=[42206] 00:14:22.328 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:14:22.328 slat (nsec): min=10808, max=42931, avg=14167.09, stdev=2263.59 00:14:22.328 clat (usec): min=127, max=306, avg=157.19, stdev=14.26 00:14:22.328 lat (usec): min=143, max=349, avg=171.36, stdev=15.03 00:14:22.328 clat percentiles (usec): 00:14:22.328 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:14:22.328 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:14:22.328 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 178], 00:14:22.328 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 306], 99.95th=[ 306], 00:14:22.328 | 99.99th=[ 306] 00:14:22.328 bw ( KiB/s): min= 4096, max= 4096, per=20.70%, avg=4096.00, stdev= 0.00, samples=1 00:14:22.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:22.328 lat (usec) : 250=95.15%, 500=0.56% 00:14:22.328 lat (msec) : 50=4.29% 00:14:22.328 cpu : usr=0.29%, sys=0.68%, ctx=537, majf=0, minf=1 00:14:22.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:14:22.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.328 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.328 job2: (groupid=0, jobs=1): err= 0: pid=438046: Wed Nov 20 06:24:53 2024 00:14:22.328 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:14:22.328 slat (nsec): min=10929, max=34153, avg=23236.73, stdev=3618.41 00:14:22.328 clat (usec): min=40777, max=41961, avg=41047.64, stdev=298.36 00:14:22.328 lat (usec): min=40788, max=41985, avg=41070.88, stdev=298.94 00:14:22.328 clat percentiles (usec): 00:14:22.328 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:14:22.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:22.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:14:22.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:22.328 | 99.99th=[42206] 00:14:22.328 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:14:22.328 slat (nsec): min=11357, max=47011, avg=12871.27, stdev=2482.07 00:14:22.328 clat (usec): min=136, max=305, avg=191.81, stdev=23.12 00:14:22.328 lat (usec): min=150, max=318, avg=204.68, stdev=23.32 00:14:22.328 clat percentiles (usec): 00:14:22.328 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:14:22.328 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:14:22.328 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 235], 00:14:22.328 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 306], 99.95th=[ 306], 00:14:22.328 | 99.99th=[ 306] 00:14:22.328 bw ( KiB/s): min= 4096, max= 4096, per=20.70%, avg=4096.00, stdev= 0.00, samples=1 00:14:22.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:22.328 lat (usec) : 250=92.88%, 500=3.00% 00:14:22.328 lat (msec) : 50=4.12% 00:14:22.328 cpu : usr=0.69%, sys=0.69%, ctx=535, majf=0, minf=1 00:14:22.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.328 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.328 job3: (groupid=0, jobs=1): err= 0: pid=438047: Wed Nov 20 06:24:53 2024 00:14:22.329 read: IOPS=1121, BW=4488KiB/s (4595kB/s)(4492KiB/1001msec) 00:14:22.329 slat (nsec): min=7630, max=39595, avg=9208.82, stdev=1790.81 00:14:22.329 clat (usec): min=191, max=41367, avg=626.36, stdev=4018.25 00:14:22.329 lat (usec): min=199, max=41378, avg=635.57, stdev=4019.07 00:14:22.329 clat percentiles (usec): 00:14:22.329 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:14:22.329 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:14:22.329 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 251], 00:14:22.329 | 99.00th=[ 343], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:22.329 | 99.99th=[41157] 00:14:22.329 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:22.329 slat (nsec): min=11320, max=50019, avg=13288.59, stdev=2396.16 00:14:22.329 clat (usec): min=124, max=343, avg=167.72, stdev=23.88 00:14:22.329 lat (usec): min=139, max=375, avg=181.01, 
stdev=24.54 00:14:22.329 clat percentiles (usec): 00:14:22.329 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:14:22.329 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:14:22.329 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 200], 95.00th=[ 210], 00:14:22.329 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 338], 99.95th=[ 343], 00:14:22.329 | 99.99th=[ 343] 00:14:22.329 bw ( KiB/s): min= 4096, max= 4096, per=20.70%, avg=4096.00, stdev= 0.00, samples=1 00:14:22.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:22.329 lat (usec) : 250=97.18%, 500=2.41% 00:14:22.329 lat (msec) : 50=0.41% 00:14:22.329 cpu : usr=1.90%, sys=4.90%, ctx=2660, majf=0, minf=1 00:14:22.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.329 issued rwts: total=1123,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.329 00:14:22.329 Run status group 0 (all jobs): 00:14:22.329 READ: bw=14.0MiB/s (14.7MB/s), 87.1KiB/s-9.91MiB/s (89.2kB/s-10.4MB/s), io=14.5MiB (15.2MB), run=1001-1035msec 00:14:22.329 WRITE: bw=19.3MiB/s (20.3MB/s), 1979KiB/s-9.99MiB/s (2026kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1035msec 00:14:22.329 00:14:22.329 Disk stats (read/write): 00:14:22.329 nvme0n1: ios=2098/2376, merge=0/0, ticks=439/326, in_queue=765, util=86.87% 00:14:22.329 nvme0n2: ios=42/512, merge=0/0, ticks=1641/81, in_queue=1722, util=90.04% 00:14:22.329 nvme0n3: ios=41/512, merge=0/0, ticks=1642/92, in_queue=1734, util=93.65% 00:14:22.329 nvme0n4: ios=898/1024, merge=0/0, ticks=1535/167, in_queue=1702, util=94.33% 00:14:22.329 06:24:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:22.329 [global] 00:14:22.329 thread=1 00:14:22.329 invalidate=1 00:14:22.329 rw=randwrite 00:14:22.329 time_based=1 00:14:22.329 runtime=1 00:14:22.329 ioengine=libaio 00:14:22.329 direct=1 00:14:22.329 bs=4096 00:14:22.329 iodepth=1 00:14:22.329 norandommap=0 00:14:22.329 numjobs=1 00:14:22.329 00:14:22.329 verify_dump=1 00:14:22.329 verify_backlog=512 00:14:22.329 verify_state_save=0 00:14:22.329 do_verify=1 00:14:22.329 verify=crc32c-intel 00:14:22.329 [job0] 00:14:22.329 filename=/dev/nvme0n1 00:14:22.329 [job1] 00:14:22.329 filename=/dev/nvme0n2 00:14:22.329 [job2] 00:14:22.329 filename=/dev/nvme0n3 00:14:22.329 [job3] 00:14:22.329 filename=/dev/nvme0n4 00:14:22.329 Could not set queue depth (nvme0n1) 00:14:22.329 Could not set queue depth (nvme0n2) 00:14:22.329 Could not set queue depth (nvme0n3) 00:14:22.329 Could not set queue depth (nvme0n4) 00:14:22.587 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:22.587 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:22.587 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:22.587 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:22.587 fio-3.35 00:14:22.587 Starting 4 threads 00:14:23.988 00:14:23.988 job0: (groupid=0, jobs=1): err= 0: pid=438421: Wed Nov 20 06:24:55 2024 00:14:23.988 
read: IOPS=1024, BW=4099KiB/s (4197kB/s)(4148KiB/1012msec) 00:14:23.988 slat (nsec): min=7093, max=42037, avg=8416.57, stdev=2273.41 00:14:23.988 clat (usec): min=181, max=41984, avg=721.12, stdev=4389.54 00:14:23.988 lat (usec): min=188, max=42008, avg=729.54, stdev=4390.76 00:14:23.988 clat percentiles (usec): 00:14:23.988 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:14:23.988 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:14:23.988 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:14:23.988 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:23.988 | 99.99th=[42206] 00:14:23.988 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:14:23.988 slat (nsec): min=10462, max=38334, avg=11773.39, stdev=1615.52 00:14:23.988 clat (usec): min=111, max=349, avg=149.11, stdev=23.84 00:14:23.988 lat (usec): min=122, max=360, avg=160.88, stdev=24.25 00:14:23.988 clat percentiles (usec): 00:14:23.988 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:14:23.988 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 149], 00:14:23.988 | 70.00th=[ 159], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 188], 00:14:23.988 | 99.00th=[ 210], 99.50th=[ 247], 99.90th=[ 334], 99.95th=[ 351], 00:14:23.988 | 99.99th=[ 351] 00:14:23.988 bw ( KiB/s): min= 784, max=11504, per=49.52%, avg=6144.00, stdev=7580.18, samples=2 00:14:23.988 iops : min= 196, max= 2876, avg=1536.00, stdev=1895.05, samples=2 00:14:23.988 lat (usec) : 250=84.61%, 500=14.92% 00:14:23.988 lat (msec) : 50=0.47% 00:14:23.988 cpu : usr=2.57%, sys=3.66%, ctx=2576, majf=0, minf=1 00:14:23.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.988 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.988 job1: (groupid=0, jobs=1): err= 0: pid=438422: Wed Nov 20 06:24:55 2024 00:14:23.988 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:14:23.988 slat (nsec): min=9703, max=23974, avg=22376.35, stdev=3061.29 00:14:23.988 clat (usec): min=40812, max=41857, avg=41029.61, stdev=221.99 00:14:23.988 lat (usec): min=40835, max=41880, avg=41051.98, stdev=220.72 00:14:23.988 clat percentiles (usec): 00:14:23.988 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:14:23.988 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:23.988 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:14:23.988 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:23.988 | 99.99th=[41681] 00:14:23.988 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:14:23.988 slat (nsec): min=9581, max=37010, avg=10588.22, stdev=1525.97 00:14:23.988 clat (usec): min=126, max=344, avg=174.90, stdev=17.61 00:14:23.988 lat (usec): min=136, max=381, avg=185.49, stdev=18.14 00:14:23.988 clat percentiles (usec): 00:14:23.988 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 163], 00:14:23.988 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:14:23.988 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:14:23.988 | 99.00th=[ 219], 99.50th=[ 247], 99.90th=[ 347], 99.95th=[ 347], 00:14:23.988 | 99.99th=[ 347] 00:14:23.988 bw ( 
KiB/s): min= 4096, max= 4096, per=33.01%, avg=4096.00, stdev= 0.00, samples=1 00:14:23.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:23.988 lat (usec) : 250=95.33%, 500=0.37% 00:14:23.988 lat (msec) : 50=4.30% 00:14:23.988 cpu : usr=0.19%, sys=0.58%, ctx=537, majf=0, minf=1 00:14:23.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.988 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.988 job2: (groupid=0, jobs=1): err= 0: pid=438423: Wed Nov 20 06:24:55 2024 00:14:23.988 read: IOPS=37, BW=150KiB/s (154kB/s)(152KiB/1013msec) 00:14:23.988 slat (nsec): min=8998, max=26234, avg=17769.53, stdev=6765.16 00:14:23.988 clat (usec): min=206, max=42010, avg=23755.31, stdev=20318.14 00:14:23.988 lat (usec): min=216, max=42033, avg=23773.08, stdev=20316.26 00:14:23.988 clat percentiles (usec): 00:14:23.988 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:14:23.988 | 30.00th=[ 253], 40.00th=[ 367], 50.00th=[40109], 60.00th=[40633], 00:14:23.988 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:23.988 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:23.988 | 99.99th=[42206] 00:14:23.988 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:14:23.988 slat (nsec): min=11230, max=58003, avg=12447.20, stdev=2785.58 00:14:23.988 clat (usec): min=127, max=345, avg=196.69, stdev=25.46 00:14:23.988 lat (usec): min=139, max=403, avg=209.14, stdev=26.10 00:14:23.988 clat percentiles (usec): 00:14:23.988 | 1.00th=[ 139], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 178], 00:14:23.988 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:14:23.988 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 215], 95.00th=[ 227], 00:14:23.988 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 347], 99.95th=[ 347], 00:14:23.988 | 99.99th=[ 347] 00:14:23.988 bw ( KiB/s): min= 4096, max= 4096, per=33.01%, avg=4096.00, stdev= 0.00, samples=1 00:14:23.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:23.988 lat (usec) : 250=91.09%, 500=4.91% 00:14:23.988 lat (msec) : 50=4.00% 00:14:23.988 cpu : usr=0.49%, sys=0.89%, ctx=552, majf=0, minf=1 00:14:23.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.988 issued rwts: total=38,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.988 job3: (groupid=0, jobs=1): err= 0: pid=438424: Wed Nov 20 06:24:55 2024 00:14:23.988 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:23.988 slat (nsec): min=6902, max=23616, avg=8099.63, stdev=2493.37 00:14:23.988 clat (usec): min=169, max=41214, avg=1713.16, stdev=7716.99 00:14:23.988 lat (usec): min=176, max=41223, avg=1721.26, stdev=7718.94 00:14:23.988 clat percentiles (usec): 00:14:23.988 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:14:23.988 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:14:23.988 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 233], 00:14:23.988 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:23.988 | 99.99th=[41157] 00:14:23.988 write: IOPS=668, BW=2673KiB/s (2737kB/s)(2676KiB/1001msec); 0 zone resets 00:14:23.988 slat (nsec): min=9347, max=38478, avg=10804.22, stdev=2172.63 00:14:23.988 clat (usec): min=114, max=346, avg=162.78, stdev=22.35 00:14:23.988 lat (usec): min=127, max=357, avg=173.58, stdev=22.75 00:14:23.988 clat percentiles (usec): 00:14:23.988 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 143], 00:14:23.988 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:14:23.988 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 194], 00:14:23.988 | 99.00th=[ 215], 99.50th=[ 255], 99.90th=[ 347], 99.95th=[ 347], 00:14:23.988 | 99.99th=[ 347] 00:14:23.989 bw ( KiB/s): min= 4096, max= 4096, per=33.01%, avg=4096.00, stdev= 0.00, samples=1 00:14:23.989 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:23.989 lat (usec) : 250=98.05%, 500=0.34% 00:14:23.989 lat (msec) : 50=1.61% 00:14:23.989 cpu : usr=0.40%, sys=1.30%, ctx=1182, majf=0, minf=2 00:14:23.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.989 issued rwts: total=512,669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.989 00:14:23.989 Run status group 0 (all jobs): 00:14:23.989 READ: bw=6186KiB/s (6335kB/s), 88.4KiB/s-4099KiB/s (90.5kB/s-4197kB/s), io=6440KiB (6595kB), run=1001-1041msec 00:14:23.989 WRITE: bw=12.1MiB/s (12.7MB/s), 1967KiB/s-6071KiB/s (2015kB/s-6217kB/s), io=12.6MiB (13.2MB), run=1001-1041msec 00:14:23.989 00:14:23.989 Disk stats (read/write): 00:14:23.989 nvme0n1: ios=1082/1536, merge=0/0, ticks=1447/218, in_queue=1665, util=86.27% 00:14:23.989 nvme0n2: ios=67/512, merge=0/0, ticks=1371/90, in_queue=1461, util=90.25% 00:14:23.989 nvme0n3: ios=53/512, merge=0/0, ticks=1642/97, in_queue=1739, util=93.56% 00:14:23.989 nvme0n4: ios=75/512, merge=0/0, ticks=800/83, in_queue=883, util=95.29% 00:14:23.989 06:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:23.989 [global] 00:14:23.989 thread=1 00:14:23.989 invalidate=1 00:14:23.989 rw=write 00:14:23.989 time_based=1 00:14:23.989 runtime=1 00:14:23.989 ioengine=libaio 00:14:23.989 direct=1 00:14:23.989 bs=4096 00:14:23.989 iodepth=128 00:14:23.989 norandommap=0 00:14:23.989 numjobs=1 00:14:23.989 00:14:23.989 verify_dump=1 00:14:23.989 verify_backlog=512 00:14:23.989 verify_state_save=0 00:14:23.989 do_verify=1 00:14:23.989 verify=crc32c-intel 00:14:23.989 [job0] 00:14:23.989 filename=/dev/nvme0n1 00:14:23.989 [job1] 00:14:23.989 filename=/dev/nvme0n2 00:14:23.989 [job2] 00:14:23.989 filename=/dev/nvme0n3 00:14:23.989 [job3] 00:14:23.989 filename=/dev/nvme0n4 00:14:23.989 Could not set queue depth (nvme0n1) 00:14:23.989 Could not set queue depth (nvme0n2) 00:14:23.989 Could not set queue depth (nvme0n3) 00:14:23.989 Could not set queue depth (nvme0n4) 00:14:24.249 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:24.249 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:24.249 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:24.249 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:24.249 fio-3.35 00:14:24.249 Starting 4 threads 00:14:25.620 00:14:25.620 job0: (groupid=0, jobs=1): err= 0: pid=438796: Wed Nov 20 06:24:57 2024 00:14:25.620 read: IOPS=3412, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1006msec) 00:14:25.620 slat (nsec): min=1613, max=14534k, avg=126461.50, stdev=819718.42 00:14:25.620 clat (usec): min=4399, max=36121, avg=16147.05, stdev=5882.44 00:14:25.620 lat (usec): min=6251, max=36127, avg=16273.51, stdev=5947.98 00:14:25.620 clat percentiles (usec): 00:14:25.620 | 1.00th=[ 6652], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11338], 00:14:25.620 | 30.00th=[11600], 40.00th=[13304], 50.00th=[15270], 60.00th=[16319], 00:14:25.620 | 70.00th=[18744], 80.00th=[21627], 90.00th=[25297], 95.00th=[26870], 00:14:25.621 | 99.00th=[31589], 99.50th=[32900], 99.90th=[34866], 99.95th=[35914], 00:14:25.621 | 99.99th=[35914] 00:14:25.621 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:14:25.621 slat (usec): min=2, max=6949, avg=143.22, stdev=584.83 00:14:25.621 clat (usec): min=365, max=45825, avg=20068.45, stdev=10282.29 00:14:25.621 lat (usec): min=382, max=45898, avg=20211.67, stdev=10349.26 00:14:25.621 clat percentiles (usec): 00:14:25.621 | 1.00th=[ 2900], 5.00th=[ 6783], 10.00th=[ 8225], 20.00th=[ 9503], 00:14:25.621 | 30.00th=[10683], 40.00th=[15270], 50.00th=[21365], 60.00th=[23200], 00:14:25.621 | 70.00th=[25035], 80.00th=[29492], 90.00th=[33424], 95.00th=[37487], 00:14:25.621 | 99.00th=[44303], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:14:25.621 | 99.99th=[45876] 00:14:25.621 bw ( KiB/s): min=12288, max=16384, per=19.10%, avg=14336.00, stdev=2896.31, samples=2 00:14:25.621 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:14:25.621 lat (usec) : 500=0.04%, 1000=0.10% 00:14:25.621 lat (msec) : 2=0.11%, 4=0.70%, 10=15.85%, 20=43.52%, 50=39.68% 00:14:25.621 cpu : usr=2.49%, sys=5.97%, ctx=461, majf=0, minf=1 00:14:25.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:25.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:25.621 issued rwts: total=3433,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:25.621 job1: (groupid=0, jobs=1): err= 0: pid=438799: Wed Nov 20 06:24:57 2024 00:14:25.621 read: IOPS=6283, BW=24.5MiB/s (25.7MB/s)(24.7MiB/1005msec) 00:14:25.621 slat (nsec): min=1387, max=9676.7k, avg=86602.29, stdev=632632.23 00:14:25.621 clat (usec): min=3222, max=20793, avg=10618.51, stdev=2792.30 00:14:25.621 lat (usec): min=3233, max=20812, avg=10705.12, stdev=2828.49 00:14:25.621 clat percentiles (usec): 00:14:25.621 | 1.00th=[ 4047], 5.00th=[ 7242], 10.00th=[ 8160], 20.00th=[ 9241], 00:14:25.621 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:14:25.621 | 70.00th=[10683], 80.00th=[12518], 90.00th=[15270], 95.00th=[16581], 00:14:25.621 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20055], 99.95th=[20579], 00:14:25.621 | 99.99th=[20841] 00:14:25.621 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:14:25.621 slat (usec): min=2, max=6353, avg=62.18, stdev=229.38 00:14:25.621 clat (usec): min=1536, max=20614, avg=9071.25, stdev=2248.93 
00:14:25.621 lat (usec): min=1550, max=20618, avg=9133.43, stdev=2268.69 00:14:25.621 clat percentiles (usec): 00:14:25.621 | 1.00th=[ 2933], 5.00th=[ 4424], 10.00th=[ 5866], 20.00th=[ 7308], 00:14:25.621 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:14:25.621 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10290], 95.00th=[11600], 00:14:25.621 | 99.00th=[16319], 99.50th=[16909], 99.90th=[18482], 99.95th=[18744], 00:14:25.621 | 99.99th=[20579] 00:14:25.621 bw ( KiB/s): min=26416, max=26832, per=35.48%, avg=26624.00, stdev=294.16, samples=2 00:14:25.621 iops : min= 6604, max= 6708, avg=6656.00, stdev=73.54, samples=2 00:14:25.621 lat (msec) : 2=0.02%, 4=2.35%, 10=58.01%, 20=39.57%, 50=0.06% 00:14:25.621 cpu : usr=4.48%, sys=6.37%, ctx=850, majf=0, minf=2 00:14:25.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:25.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:25.621 issued rwts: total=6315,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:25.621 job2: (groupid=0, jobs=1): err= 0: pid=438800: Wed Nov 20 06:24:57 2024 00:14:25.621 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:14:25.621 slat (nsec): min=1029, max=12710k, avg=132023.67, stdev=803602.67 00:14:25.621 clat (usec): min=5901, max=32411, avg=17429.33, stdev=4221.10 00:14:25.621 lat (usec): min=5908, max=32416, avg=17561.36, stdev=4274.46 00:14:25.621 clat percentiles (usec): 00:14:25.621 | 1.00th=[ 6390], 5.00th=[10683], 10.00th=[12911], 20.00th=[14615], 00:14:25.621 | 30.00th=[15139], 40.00th=[16057], 50.00th=[16909], 60.00th=[18220], 00:14:25.621 | 70.00th=[18744], 80.00th=[20317], 90.00th=[23462], 95.00th=[25035], 00:14:25.621 | 99.00th=[30802], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:14:25.621 | 99.99th=[32375] 00:14:25.621 write: IOPS=3497, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1005msec); 0 zone resets 00:14:25.621 slat (usec): min=2, max=19437, avg=161.78, stdev=986.50 00:14:25.621 clat (usec): min=441, max=55948, avg=20837.36, stdev=8711.82 00:14:25.621 lat (usec): min=5860, max=55976, avg=20999.15, stdev=8789.99 00:14:25.621 clat percentiles (usec): 00:14:25.621 | 1.00th=[ 6325], 5.00th=[11469], 10.00th=[12387], 20.00th=[14353], 00:14:25.621 | 30.00th=[15533], 40.00th=[17433], 50.00th=[20055], 60.00th=[21365], 00:14:25.621 | 70.00th=[22152], 80.00th=[24511], 90.00th=[30802], 95.00th=[38536], 00:14:25.621 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:14:25.621 | 99.99th=[55837] 00:14:25.621 bw ( KiB/s): min=12288, max=14808, per=18.05%, avg=13548.00, stdev=1781.91, samples=2 00:14:25.621 iops : min= 3072, max= 3702, avg=3387.00, stdev=445.48, samples=2 00:14:25.621 lat (usec) : 500=0.02% 00:14:25.621 lat (msec) : 10=2.43%, 20=60.77%, 50=35.57%, 100=1.21% 00:14:25.621 cpu : usr=2.99%, sys=4.08%, ctx=296, majf=0, minf=1 00:14:25.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:25.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:25.621 issued rwts: total=3072,3515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:25.621 job3: (groupid=0, jobs=1): err= 0: pid=438802: Wed Nov 20 06:24:57 2024 00:14:25.621 read: IOPS=4913, 
BW=19.2MiB/s (20.1MB/s)(19.3MiB/1006msec) 00:14:25.621 slat (nsec): min=1488, max=29164k, avg=95465.41, stdev=665673.14 00:14:25.621 clat (usec): min=4840, max=45323, avg=12283.42, stdev=5200.38 00:14:25.621 lat (usec): min=4846, max=49964, avg=12378.88, stdev=5234.60 00:14:25.621 clat percentiles (usec): 00:14:25.621 | 1.00th=[ 5276], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[10552], 00:14:25.621 | 30.00th=[10945], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:14:25.621 | 70.00th=[11600], 80.00th=[12649], 90.00th=[14615], 95.00th=[18482], 00:14:25.621 | 99.00th=[44303], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:14:25.621 | 99.99th=[45351] 00:14:25.621 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:14:25.621 slat (usec): min=2, max=39183, avg=97.27, stdev=761.26 00:14:25.621 clat (usec): min=6329, max=45646, avg=12017.57, stdev=3666.36 00:14:25.621 lat (usec): min=6342, max=45679, avg=12114.85, stdev=3741.34 00:14:25.621 clat percentiles (usec): 00:14:25.621 | 1.00th=[ 7046], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[10683], 00:14:25.621 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:14:25.621 | 70.00th=[11469], 80.00th=[11731], 90.00th=[13435], 95.00th=[18482], 00:14:25.621 | 99.00th=[31589], 99.50th=[31589], 99.90th=[31589], 99.95th=[34341], 00:14:25.621 | 99.99th=[45876] 00:14:25.621 bw ( KiB/s): min=20480, max=20480, per=27.29%, avg=20480.00, stdev= 0.00, samples=2 00:14:25.621 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:14:25.621 lat (msec) : 10=10.97%, 20=84.38%, 50=4.65% 00:14:25.621 cpu : usr=3.28%, sys=6.37%, ctx=590, majf=0, minf=1 00:14:25.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:25.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:25.621 issued rwts: total=4943,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:25.621 00:14:25.621 Run status group 0 (all jobs): 00:14:25.621 READ: bw=69.0MiB/s (72.3MB/s), 11.9MiB/s-24.5MiB/s (12.5MB/s-25.7MB/s), io=69.4MiB (72.8MB), run=1005-1006msec 00:14:25.621 WRITE: bw=73.3MiB/s (76.8MB/s), 13.7MiB/s-25.9MiB/s (14.3MB/s-27.1MB/s), io=73.7MiB (77.3MB), run=1005-1006msec 00:14:25.621 00:14:25.621 Disk stats (read/write): 00:14:25.621 nvme0n1: ios=2884/3072, merge=0/0, ticks=33068/43249, in_queue=76317, util=96.09% 00:14:25.621 nvme0n2: ios=5142/5263, merge=0/0, ticks=53114/46881, in_queue=99995, util=91.79% 00:14:25.621 nvme0n3: ios=2616/2675, merge=0/0, ticks=22373/26981, in_queue=49354, util=88.65% 00:14:25.621 nvme0n4: ios=3956/4096, merge=0/0, ticks=26109/23452, in_queue=49561, util=100.00% 00:14:25.621 06:24:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:25.621 [global] 00:14:25.622 thread=1 00:14:25.622 invalidate=1 00:14:25.622 rw=randwrite 00:14:25.622 time_based=1 00:14:25.622 runtime=1 00:14:25.622 ioengine=libaio 00:14:25.622 direct=1 00:14:25.622 bs=4096 00:14:25.622 iodepth=128 00:14:25.622 norandommap=0 00:14:25.622 numjobs=1 00:14:25.622 00:14:25.622 verify_dump=1 00:14:25.622 verify_backlog=512 00:14:25.622 verify_state_save=0 00:14:25.622 do_verify=1 00:14:25.622 verify=crc32c-intel 00:14:25.622 [job0] 00:14:25.622 filename=/dev/nvme0n1 00:14:25.622 [job1] 
00:14:25.622 filename=/dev/nvme0n2 00:14:25.622 [job2] 00:14:25.622 filename=/dev/nvme0n3 00:14:25.622 [job3] 00:14:25.622 filename=/dev/nvme0n4 00:14:25.622 Could not set queue depth (nvme0n1) 00:14:25.622 Could not set queue depth (nvme0n2) 00:14:25.622 Could not set queue depth (nvme0n3) 00:14:25.622 Could not set queue depth (nvme0n4) 00:14:25.878 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:25.878 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:25.878 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:25.878 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:25.878 fio-3.35 00:14:25.878 Starting 4 threads 00:14:27.245 00:14:27.245 job0: (groupid=0, jobs=1): err= 0: pid=439170: Wed Nov 20 06:24:58 2024 00:14:27.245 read: IOPS=5194, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1003msec) 00:14:27.245 slat (nsec): min=1061, max=27630k, avg=88812.76, stdev=706241.05 00:14:27.245 clat (usec): min=2222, max=61223, avg=11596.13, stdev=7454.66 00:14:27.245 lat (usec): min=2228, max=61246, avg=11684.94, stdev=7492.34 00:14:27.245 clat percentiles (usec): 00:14:27.245 | 1.00th=[ 5276], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 8848], 00:14:27.245 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:14:27.245 | 70.00th=[10290], 80.00th=[11600], 90.00th=[16057], 95.00th=[23462], 00:14:27.245 | 99.00th=[49546], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:14:27.245 | 99.99th=[61080] 00:14:27.245 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:14:27.245 slat (nsec): min=1671, max=17909k, avg=90329.96, stdev=591496.33 00:14:27.245 clat (usec): min=1149, max=77074, avg=11860.32, stdev=9314.69 00:14:27.245 lat (usec): min=1160, max=77083, avg=11950.65, stdev=9367.63 00:14:27.245 clat percentiles (usec): 00:14:27.245 | 1.00th=[ 3163], 5.00th=[ 6521], 10.00th=[ 7701], 20.00th=[ 9110], 00:14:27.245 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:14:27.245 | 70.00th=[10290], 80.00th=[11076], 90.00th=[14484], 95.00th=[23200], 00:14:27.245 | 99.00th=[66323], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:14:27.245 | 99.99th=[77071] 00:14:27.245 bw ( KiB/s): min=22344, max=22416, per=31.02%, avg=22380.00, stdev=50.91, samples=2 00:14:27.245 iops : min= 5586, max= 5604, avg=5595.00, stdev=12.73, samples=2 00:14:27.245 lat (msec) : 2=0.35%, 4=0.62%, 10=56.37%, 20=35.64%, 50=5.62% 00:14:27.245 lat (msec) : 100=1.40% 00:14:27.245 cpu : usr=2.40%, sys=5.59%, ctx=529, majf=0, minf=1 00:14:27.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:27.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.245 issued rwts: total=5210,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.245 job1: (groupid=0, jobs=1): err= 0: pid=439171: Wed Nov 20 06:24:58 2024 00:14:27.245 read: IOPS=2801, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1004msec) 00:14:27.245 slat (nsec): min=1328, max=25932k, avg=198228.06, stdev=1396842.33 00:14:27.245 clat (usec): min=2598, max=99318, avg=23019.43, stdev=15410.28 00:14:27.245 lat (usec): min=4946, max=99328, avg=23217.65, stdev=15536.85 
00:14:27.245 clat percentiles (usec): 00:14:27.245 | 1.00th=[ 6849], 5.00th=[ 9241], 10.00th=[11994], 20.00th=[14353], 00:14:27.245 | 30.00th=[15795], 40.00th=[16909], 50.00th=[17957], 60.00th=[18482], 00:14:27.245 | 70.00th=[19268], 80.00th=[30802], 90.00th=[46924], 95.00th=[56886], 00:14:27.245 | 99.00th=[76022], 99.50th=[76022], 99.90th=[83362], 99.95th=[90702], 00:14:27.245 | 99.99th=[99091] 00:14:27.245 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:14:27.245 slat (nsec): min=1873, max=15776k, avg=136305.66, stdev=838846.67 00:14:27.245 clat (usec): min=724, max=59845, avg=20212.10, stdev=12923.84 00:14:27.245 lat (usec): min=734, max=59848, avg=20348.41, stdev=12994.09 00:14:27.245 clat percentiles (usec): 00:14:27.245 | 1.00th=[ 5342], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[ 9634], 00:14:27.245 | 30.00th=[11338], 40.00th=[13829], 50.00th=[16450], 60.00th=[19530], 00:14:27.245 | 70.00th=[22414], 80.00th=[32375], 90.00th=[40109], 95.00th=[49021], 00:14:27.245 | 99.00th=[59507], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:14:27.245 | 99.99th=[60031] 00:14:27.245 bw ( KiB/s): min=12288, max=12288, per=17.03%, avg=12288.00, stdev= 0.00, samples=2 00:14:27.245 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:14:27.245 lat (usec) : 750=0.07% 00:14:27.245 lat (msec) : 4=0.02%, 10=15.36%, 20=51.13%, 50=26.88%, 100=6.54% 00:14:27.245 cpu : usr=1.79%, sys=3.19%, ctx=314, majf=0, minf=1 00:14:27.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:27.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.245 issued rwts: total=2813,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.245 job2: (groupid=0, jobs=1): err= 0: pid=439176: Wed Nov 20 06:24:58 2024 00:14:27.245 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:14:27.245 slat (nsec): min=1097, max=24663k, avg=87555.93, stdev=747017.93 00:14:27.245 clat (usec): min=4507, max=42085, avg=12463.61, stdev=5746.45 00:14:27.245 lat (usec): min=5457, max=47732, avg=12551.16, stdev=5788.58 00:14:27.245 clat percentiles (usec): 00:14:27.245 | 1.00th=[ 6128], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 9110], 00:14:27.245 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:14:27.245 | 70.00th=[12256], 80.00th=[13435], 90.00th=[16909], 95.00th=[24511], 00:14:27.245 | 99.00th=[38011], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:14:27.245 | 99.99th=[42206] 00:14:27.245 write: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(21.4MiB/1003msec); 0 zone resets 00:14:27.245 slat (usec): min=2, max=13947, avg=83.11, stdev=657.29 00:14:27.245 clat (usec): min=433, max=54772, avg=11514.38, stdev=5954.39 00:14:27.245 lat (usec): min=998, max=54793, avg=11597.49, stdev=5998.23 00:14:27.245 clat percentiles (usec): 00:14:27.245 | 1.00th=[ 4228], 5.00th=[ 6063], 10.00th=[ 7242], 20.00th=[ 8848], 00:14:27.245 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11076], 00:14:27.245 | 70.00th=[11338], 80.00th=[11731], 90.00th=[15401], 95.00th=[20841], 00:14:27.245 | 99.00th=[43254], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:14:27.245 | 99.99th=[54789] 00:14:27.245 bw ( KiB/s): min=20480, max=22312, per=29.66%, avg=21396.00, stdev=1295.42, samples=2 00:14:27.245 iops : min= 5120, max= 5578, avg=5349.00, stdev=323.85, samples=2 00:14:27.245 lat 
(usec) : 500=0.01%, 1000=0.01% 00:14:27.245 lat (msec) : 4=0.42%, 10=35.19%, 20=57.35%, 50=7.01%, 100=0.01% 00:14:27.245 cpu : usr=3.49%, sys=5.99%, ctx=324, majf=0, minf=1 00:14:27.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:27.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.245 issued rwts: total=5120,5477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.245 job3: (groupid=0, jobs=1): err= 0: pid=439179: Wed Nov 20 06:24:58 2024 00:14:27.245 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:14:27.245 slat (nsec): min=1392, max=12454k, avg=125540.23, stdev=864697.50 00:14:27.246 clat (usec): min=5281, max=38141, avg=15273.38, stdev=4926.68 00:14:27.246 lat (usec): min=5287, max=38149, avg=15398.92, stdev=4992.69 00:14:27.246 clat percentiles (usec): 00:14:27.246 | 1.00th=[ 7046], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[12256], 00:14:27.246 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14091], 60.00th=[14615], 00:14:27.246 | 70.00th=[15533], 80.00th=[17433], 90.00th=[20317], 95.00th=[27132], 00:14:27.246 | 99.00th=[33817], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:14:27.246 | 99.99th=[38011] 00:14:27.246 write: IOPS=3966, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1008msec); 0 zone resets 00:14:27.246 slat (usec): min=2, max=15012, avg=131.12, stdev=700.87 00:14:27.246 clat (usec): min=2892, max=41033, avg=18232.89, stdev=7003.00 00:14:27.246 lat (usec): min=2902, max=41038, avg=18364.01, stdev=7061.30 00:14:27.246 clat percentiles (usec): 00:14:27.246 | 1.00th=[ 5473], 5.00th=[ 8455], 10.00th=[10159], 20.00th=[11600], 00:14:27.246 | 30.00th=[13435], 40.00th=[15401], 50.00th=[17957], 60.00th=[20055], 00:14:27.246 | 70.00th=[22152], 80.00th=[24511], 90.00th=[27132], 95.00th=[29754], 00:14:27.246 | 99.00th=[38011], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:14:27.246 | 99.99th=[41157] 00:14:27.246 bw ( KiB/s): min=13768, max=17200, per=21.46%, avg=15484.00, stdev=2426.79, samples=2 00:14:27.246 iops : min= 3442, max= 4300, avg=3871.00, stdev=606.70, samples=2 00:14:27.246 lat (msec) : 4=0.24%, 10=7.27%, 20=65.27%, 50=27.22% 00:14:27.246 cpu : usr=3.08%, sys=5.06%, ctx=376, majf=0, minf=1 00:14:27.246 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:27.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.246 issued rwts: total=3584,3998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.246 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.246 00:14:27.246 Run status group 0 (all jobs): 00:14:27.246 READ: bw=64.8MiB/s (68.0MB/s), 10.9MiB/s-20.3MiB/s (11.5MB/s-21.3MB/s), io=65.3MiB (68.5MB), run=1003-1008msec 00:14:27.246 WRITE: bw=70.4MiB/s (73.9MB/s), 12.0MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=71.0MiB (74.5MB), run=1003-1008msec 00:14:27.246 00:14:27.246 Disk stats (read/write): 00:14:27.246 nvme0n1: ios=4246/4608, merge=0/0, ticks=22345/25107, in_queue=47452, util=87.37% 00:14:27.246 nvme0n2: ios=2578/2631, merge=0/0, ticks=23007/16402, in_queue=39409, util=94.93% 00:14:27.246 nvme0n3: ios=4343/4608, merge=0/0, ticks=43196/38377, in_queue=81573, util=97.82% 00:14:27.246 nvme0n4: ios=3130/3327, merge=0/0, ticks=46509/58199, in_queue=104708, util=98.32% 00:14:27.246 06:24:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:27.246 06:24:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=439403 00:14:27.246 06:24:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:27.246 06:24:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:27.246 [global] 00:14:27.246 thread=1 00:14:27.246 invalidate=1 00:14:27.246 rw=read 00:14:27.246 time_based=1 00:14:27.246 runtime=10 00:14:27.246 ioengine=libaio 00:14:27.246 direct=1 00:14:27.246 bs=4096 00:14:27.246 iodepth=1 00:14:27.246 norandommap=1 00:14:27.246 numjobs=1 00:14:27.246 00:14:27.246 [job0] 00:14:27.246 filename=/dev/nvme0n1 00:14:27.246 [job1] 00:14:27.246 filename=/dev/nvme0n2 00:14:27.246 [job2] 00:14:27.246 filename=/dev/nvme0n3 00:14:27.246 [job3] 00:14:27.246 filename=/dev/nvme0n4 00:14:27.246 Could not set queue depth (nvme0n1) 00:14:27.246 Could not set queue depth (nvme0n2) 00:14:27.246 Could not set queue depth (nvme0n3) 00:14:27.246 Could not set queue depth (nvme0n4) 00:14:27.246 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:27.246 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:27.246 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:27.246 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:27.246 fio-3.35 00:14:27.246 Starting 4 threads 00:14:30.542 06:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:30.542 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:14:30.542 fio: pid=439636, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:30.542 06:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:30.542 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=55848960, buflen=4096 00:14:30.542 fio: pid=439631, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:30.542 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:30.542 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:30.542 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:30.542 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:30.542 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=364544, buflen=4096 00:14:30.542 fio: pid=439601, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:14:30.801 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2199552, buflen=4096 00:14:30.801 fio: pid=439614, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:14:30.801 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:30.801 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:30.801 00:14:30.801 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=439601: Wed Nov 20 06:25:02 2024 00:14:30.801 read: IOPS=28, BW=112KiB/s (115kB/s)(356KiB/3167msec) 00:14:30.801 slat (usec): min=8, max=21766, avg=441.12, stdev=2588.04 00:14:30.801 clat (usec): min=231, max=44924, avg=35125.35, stdev=14448.15 00:14:30.801 lat (usec): min=245, max=50831, avg=35495.78, stdev=14110.74 00:14:30.801 clat percentiles (usec): 00:14:30.801 | 1.00th=[ 231], 5.00th=[ 273], 10.00th=[ 314], 20.00th=[40633], 00:14:30.801 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:30.801 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:14:30.801 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:14:30.801 | 99.99th=[44827] 00:14:30.801 bw ( KiB/s): min= 96, max= 139, per=0.64%, avg=109.83, stdev=16.76, samples=6 00:14:30.801 iops : min= 24, max= 34, avg=27.33, stdev= 3.93, samples=6 00:14:30.801 lat (usec) : 250=2.22%, 500=11.11% 00:14:30.801 lat (msec) : 2=1.11%, 50=84.44% 00:14:30.801 cpu : usr=0.00%, sys=0.28%, ctx=92, majf=0, minf=2 00:14:30.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:30.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.801 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.801 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:30.801 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=439614: Wed Nov 20 06:25:02 2024 00:14:30.801 read: IOPS=160, BW=643KiB/s (658kB/s)(2148KiB/3343msec) 00:14:30.801 slat (usec): min=6, max=17719, avg=42.47, stdev=763.57 00:14:30.801 clat (usec): min=179, max=45088, avg=6160.04, stdev=14447.55 00:14:30.801 lat (usec): min=185, max=59007, avg=6202.54, stdev=14553.21 00:14:30.801 clat percentiles (usec): 00:14:30.801 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:14:30.801 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:14:30.801 | 70.00th=[ 223], 80.00th=[ 239], 90.00th=[41157], 95.00th=[41157], 00:14:30.801 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:14:30.801 | 99.99th=[44827] 00:14:30.801 bw ( KiB/s): min= 92, max= 3432, per=4.11%, avg=704.67, stdev=1341.90, samples=6 00:14:30.801 iops : min= 23, max= 858, avg=176.17, stdev=335.47, samples=6 00:14:30.801 lat (usec) : 250=83.09%, 500=2.23% 00:14:30.801 lat (msec) : 50=14.50% 00:14:30.801 cpu : usr=0.12%, sys=0.12%, ctx=541, majf=0, minf=1 00:14:30.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:30.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.801 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.801 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:30.801 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u 
error, error=Operation not supported): pid=439631: Wed Nov 20 06:25:02 2024 00:14:30.801 read: IOPS=4628, BW=18.1MiB/s (19.0MB/s)(53.3MiB/2946msec) 00:14:30.801 slat (nsec): min=6398, max=40936, avg=7522.94, stdev=1086.32 00:14:30.801 clat (usec): min=164, max=41446, avg=205.79, stdev=358.62 00:14:30.801 lat (usec): min=171, max=41454, avg=213.31, stdev=358.63 00:14:30.801 clat percentiles (usec): 00:14:30.801 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:14:30.801 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:14:30.801 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 229], 00:14:30.801 | 99.00th=[ 245], 99.50th=[ 258], 99.90th=[ 371], 99.95th=[ 465], 00:14:30.801 | 99.99th=[ 4080] 00:14:30.801 bw ( KiB/s): min=17576, max=18904, per=100.00%, avg=18510.40, stdev=539.11, samples=5 00:14:30.801 iops : min= 4394, max= 4726, avg=4627.60, stdev=134.78, samples=5 00:14:30.801 lat (usec) : 250=99.28%, 500=0.67% 00:14:30.801 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01%, 50=0.01% 00:14:30.801 cpu : usr=1.05%, sys=4.35%, ctx=13637, majf=0, minf=2 00:14:30.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:30.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.801 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.801 issued rwts: total=13636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:30.801 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=439636: Wed Nov 20 06:25:02 2024 00:14:30.801 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2728msec) 00:14:30.801 slat (nsec): min=5915, max=25025, avg=21608.12, stdev=4928.22 00:14:30.801 clat (usec): min=356, max=41147, avg=40366.21, stdev=4962.52 00:14:30.801 lat (usec): min=381, max=41171, avg=40387.79, stdev=4962.10 00:14:30.801 clat percentiles (usec): 00:14:30.801 | 1.00th=[ 355], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:30.801 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:30.801 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:30.801 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:30.801 | 99.99th=[41157] 00:14:30.801 bw ( KiB/s): min= 96, max= 104, per=0.58%, avg=99.20, stdev= 4.38, samples=5 00:14:30.801 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:14:30.801 lat (usec) : 500=1.47% 00:14:30.801 lat (msec) : 50=97.06% 00:14:30.801 cpu : usr=0.07%, sys=0.00%, ctx=71, majf=0, minf=2 00:14:30.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:30.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.801 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.801 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:30.801 00:14:30.801 Run status group 0 (all jobs): 00:14:30.801 READ: bw=16.7MiB/s (17.6MB/s), 98.2KiB/s-18.1MiB/s (101kB/s-19.0MB/s), io=56.0MiB (58.7MB), run=2728-3343msec 00:14:30.801 00:14:30.801 Disk stats (read/write): 00:14:30.801 nvme0n1: ios=86/0, merge=0/0, ticks=3045/0, in_queue=3045, util=94.85% 00:14:30.801 nvme0n2: ios=531/0, merge=0/0, ticks=3061/0, in_queue=3061, util=96.13% 00:14:30.801 nvme0n3: ios=13347/0, merge=0/0, ticks=3567/0, in_queue=3567, util=98.85% 00:14:30.801 nvme0n4: 
ios=98/0, merge=0/0, ticks=3259/0, in_queue=3259, util=99.89% 00:14:31.059 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:31.059 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:31.317 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:31.317 06:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:31.317 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:31.317 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:31.574 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:31.574 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:31.832 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:31.832 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 439403 00:14:31.832 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:31.832 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.832 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:31.832 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:14:31.832 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:31.832 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:32.090 nvmf hotplug test: fio failed as expected 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:32.090 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:32.090 rmmod nvme_tcp 00:14:32.090 rmmod nvme_fabrics 00:14:32.350 rmmod nvme_keyring 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 436692 ']' 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 436692 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 436692 ']' 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 436692 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 436692 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 436692' 00:14:32.350 killing process with pid 436692 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 436692 00:14:32.350 06:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 436692 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:32.350 06:25:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.350 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:34.889 00:14:34.889 real 0m26.907s 00:14:34.889 user 1m47.042s 00:14:34.889 sys 0m8.294s 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.889 ************************************ 00:14:34.889 END TEST nvmf_fio_target 00:14:34.889 ************************************ 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:34.889 ************************************ 00:14:34.889 START TEST nvmf_bdevio 00:14:34.889 ************************************ 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:34.889 * Looking for test storage... 
00:14:34.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:34.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.889 --rc genhtml_branch_coverage=1 00:14:34.889 --rc genhtml_function_coverage=1 00:14:34.889 --rc genhtml_legend=1 00:14:34.889 --rc geninfo_all_blocks=1 00:14:34.889 --rc geninfo_unexecuted_blocks=1 00:14:34.889 00:14:34.889 ' 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:34.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.889 --rc genhtml_branch_coverage=1 00:14:34.889 --rc genhtml_function_coverage=1 00:14:34.889 --rc genhtml_legend=1 00:14:34.889 --rc geninfo_all_blocks=1 00:14:34.889 --rc geninfo_unexecuted_blocks=1 00:14:34.889 00:14:34.889 ' 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:34.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.889 --rc genhtml_branch_coverage=1 00:14:34.889 --rc genhtml_function_coverage=1 00:14:34.889 --rc genhtml_legend=1 00:14:34.889 --rc geninfo_all_blocks=1 00:14:34.889 --rc geninfo_unexecuted_blocks=1 00:14:34.889 00:14:34.889 ' 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:34.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.889 --rc genhtml_branch_coverage=1 00:14:34.889 --rc genhtml_function_coverage=1 00:14:34.889 --rc genhtml_legend=1 00:14:34.889 --rc geninfo_all_blocks=1 00:14:34.889 --rc geninfo_unexecuted_blocks=1 00:14:34.889 00:14:34.889 ' 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.889 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:14:34.890 06:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.463 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:41.464 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:41.464 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:41.464 06:25:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:41.464 Found net devices under 0000:86:00.0: cvl_0_0 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:41.464 Found net devices under 0000:86:00.1: cvl_0_1 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.464 
06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:41.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:14:41.464 00:14:41.464 --- 10.0.0.2 ping statistics --- 00:14:41.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.464 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:14:41.464 00:14:41.464 --- 10.0.0.1 ping statistics --- 00:14:41.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.464 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=444014 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 444014 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 444014 ']' 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:41.464 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.464 [2024-11-20 06:25:12.543587] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:14:41.465 [2024-11-20 06:25:12.543632] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.465 [2024-11-20 06:25:12.621783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.465 [2024-11-20 06:25:12.663338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.465 [2024-11-20 06:25:12.663373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.465 [2024-11-20 06:25:12.663380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.465 [2024-11-20 06:25:12.663386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.465 [2024-11-20 06:25:12.663391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.465 [2024-11-20 06:25:12.665014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:41.465 [2024-11-20 06:25:12.665125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:41.465 [2024-11-20 06:25:12.665249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.465 [2024-11-20 06:25:12.665250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.465 [2024-11-20 06:25:12.801347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.465 Malloc0 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.465 06:25:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.465 [2024-11-20 06:25:12.863797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:41.465 { 00:14:41.465 "params": { 00:14:41.465 "name": "Nvme$subsystem", 00:14:41.465 "trtype": "$TEST_TRANSPORT", 00:14:41.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.465 "adrfam": "ipv4", 00:14:41.465 "trsvcid": "$NVMF_PORT", 00:14:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.465 "hdgst": ${hdgst:-false}, 00:14:41.465 "ddgst": ${ddgst:-false} 00:14:41.465 }, 00:14:41.465 "method": "bdev_nvme_attach_controller" 00:14:41.465 } 00:14:41.465 EOF 00:14:41.465 )") 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:41.465 06:25:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:41.465 "params": { 00:14:41.465 "name": "Nvme1", 00:14:41.465 "trtype": "tcp", 00:14:41.465 "traddr": "10.0.0.2", 00:14:41.465 "adrfam": "ipv4", 00:14:41.465 "trsvcid": "4420", 00:14:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.465 "hdgst": false, 00:14:41.465 "ddgst": false 00:14:41.465 }, 00:14:41.465 "method": "bdev_nvme_attach_controller" 00:14:41.465 }' 00:14:41.465 [2024-11-20 06:25:12.916175] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:14:41.465 [2024-11-20 06:25:12.916224] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444038 ] 00:14:41.465 [2024-11-20 06:25:12.989191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:41.465 [2024-11-20 06:25:13.032907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.465 [2024-11-20 06:25:13.033017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.465 [2024-11-20 06:25:13.033017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.723 I/O targets: 00:14:41.723 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:41.723 00:14:41.723 00:14:41.723 CUnit - A unit testing framework for C - Version 2.1-3 00:14:41.723 http://cunit.sourceforge.net/ 00:14:41.723 00:14:41.723 00:14:41.723 Suite: bdevio tests on: Nvme1n1 00:14:41.723 Test: blockdev write read block ...passed 00:14:41.723 Test: blockdev write zeroes read block ...passed 00:14:41.723 Test: blockdev write zeroes read no split ...passed 00:14:41.723 Test: blockdev write zeroes read split ...passed 00:14:41.723 Test: blockdev write zeroes read split partial ...passed 00:14:41.723 Test: blockdev reset ...[2024-11-20 06:25:13.548763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:41.723 [2024-11-20 06:25:13.548825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x543340 (9): Bad file descriptor 00:14:41.981 [2024-11-20 06:25:13.652324] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:41.981 passed 00:14:41.981 Test: blockdev write read 8 blocks ...passed 00:14:41.981 Test: blockdev write read size > 128k ...passed 00:14:41.981 Test: blockdev write read invalid size ...passed 00:14:41.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:41.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:41.981 Test: blockdev write read max offset ...passed 00:14:41.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:42.240 Test: blockdev writev readv 8 blocks ...passed 00:14:42.240 Test: blockdev writev readv 30 x 1block ...passed 00:14:42.240 Test: blockdev writev readv block ...passed 00:14:42.240 Test: blockdev writev readv size > 128k ...passed 00:14:42.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:42.240 Test: blockdev comparev and writev ...[2024-11-20 06:25:13.862958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.240 [2024-11-20 06:25:13.862987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.863001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.240 [2024-11-20 06:25:13.863009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.863258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.240 [2024-11-20 06:25:13.863268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.863280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.240 [2024-11-20 06:25:13.863287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.863526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.240 [2024-11-20 06:25:13.863535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.863550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.240 [2024-11-20 06:25:13.863558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.863777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.240 [2024-11-20 06:25:13.863787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.863798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.240 [2024-11-20 06:25:13.863804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:42.240 passed 00:14:42.240 Test: blockdev nvme passthru rw ...passed 00:14:42.240 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:25:13.946568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.240 [2024-11-20 06:25:13.946583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.946689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.240 [2024-11-20 06:25:13.946698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.946792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.240 [2024-11-20 06:25:13.946800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:42.240 [2024-11-20 06:25:13.946907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.240 [2024-11-20 06:25:13.946916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:42.240 passed 00:14:42.240 Test: blockdev nvme admin passthru ...passed 00:14:42.240 Test: blockdev copy ...passed 00:14:42.240 00:14:42.240 Run Summary: Type Total Ran Passed Failed Inactive 00:14:42.240 suites 1 1 n/a 0 0 00:14:42.240 tests 23 23 23 0 0 00:14:42.240 asserts 152 152 152 0 n/a 00:14:42.240 00:14:42.240 Elapsed time = 1.223 seconds 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.499 rmmod nvme_tcp 00:14:42.499 rmmod nvme_fabrics 00:14:42.499 rmmod nvme_keyring 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 444014 ']' 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 444014 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 444014 ']' 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 444014 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 444014 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 444014' 00:14:42.499 killing process with pid 444014 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 444014 00:14:42.499 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 444014 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.759 06:25:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:45.303 00:14:45.303 real 0m10.215s 00:14:45.303 user 0m11.461s 00:14:45.303 sys 0m4.983s 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.303 ************************************ 00:14:45.303 END TEST nvmf_bdevio 00:14:45.303 ************************************ 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:45.303 00:14:45.303 real 4m37.462s 00:14:45.303 user 10m24.406s 00:14:45.303 sys 1m37.493s 00:14:45.303 
06:25:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:45.303 ************************************ 00:14:45.303 END TEST nvmf_target_core 00:14:45.303 ************************************ 00:14:45.303 06:25:16 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:45.303 06:25:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:45.303 06:25:16 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:45.303 06:25:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:45.303 ************************************ 00:14:45.303 START TEST nvmf_target_extra 00:14:45.303 ************************************ 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:45.303 * Looking for test storage... 00:14:45.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:45.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.303 --rc genhtml_branch_coverage=1 00:14:45.303 --rc genhtml_function_coverage=1 00:14:45.303 --rc genhtml_legend=1 00:14:45.303 --rc geninfo_all_blocks=1 00:14:45.303 --rc geninfo_unexecuted_blocks=1 00:14:45.303 00:14:45.303 ' 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:45.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.303 --rc genhtml_branch_coverage=1 00:14:45.303 --rc genhtml_function_coverage=1 00:14:45.303 --rc genhtml_legend=1 00:14:45.303 --rc geninfo_all_blocks=1 00:14:45.303 --rc geninfo_unexecuted_blocks=1 00:14:45.303 00:14:45.303 ' 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:45.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.303 --rc genhtml_branch_coverage=1 00:14:45.303 --rc genhtml_function_coverage=1 00:14:45.303 --rc genhtml_legend=1 00:14:45.303 --rc geninfo_all_blocks=1 00:14:45.303 --rc geninfo_unexecuted_blocks=1 00:14:45.303 00:14:45.303 ' 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:45.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.303 --rc genhtml_branch_coverage=1 00:14:45.303 --rc genhtml_function_coverage=1 00:14:45.303 --rc genhtml_legend=1 00:14:45.303 --rc geninfo_all_blocks=1 00:14:45.303 --rc geninfo_unexecuted_blocks=1 00:14:45.303 00:14:45.303 ' 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
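Aside: the cmp_versions/lt trace above is the harness deciding whether the installed lcov (1.15 here) is older than 2 before exporting its coverage --rc flags. The idiom is a plain-bash dotted-version comparison. A minimal runnable sketch of the same technique — ver_lt is a hypothetical name, simplified from the scripts/common.sh logic, not the verbatim function:

ver_lt() {                           # true (exit 0) if $1 < $2, field by field
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"             # IFS=.- splits on dots and dashes
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                         # equal is not "less than"
}
ver_lt 1.15 2 && echo "lcov < 2"     # matches the trace: 1 < 2 in the first field

As in the trace, the outcome of this comparison drives which LCOV_OPTS switches get exported a few commands later.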
00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.303 06:25:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.304 ************************************ 00:14:45.304 START TEST nvmf_example 00:14:45.304 ************************************ 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:45.304 * Looking for test storage... 
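Aside: the "[: : integer expression expected" complaint above is not a test failure — common.sh line 33 runs '[' '' -eq 1 ']', i.e. test(1) is handed an empty string where -eq needs an integer, so the test errors out and the guarded branch is simply skipped. An illustrative reproduction only (the variable name x is made up; the guard on the last line is the usual fix, not what common.sh does):

x=''
[ "$x" -eq 1 ] && echo enabled        # bash: [: : integer expression expected
[ "${x:-0}" -eq 1 ] && echo enabled   # defaulting the empty value avoids the noise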
00:14:45.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:14:45.304 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:45.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.304 --rc genhtml_branch_coverage=1 00:14:45.304 --rc genhtml_function_coverage=1 00:14:45.304 --rc genhtml_legend=1 00:14:45.304 --rc geninfo_all_blocks=1 00:14:45.304 --rc geninfo_unexecuted_blocks=1 00:14:45.304 00:14:45.304 ' 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:45.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.304 --rc genhtml_branch_coverage=1 00:14:45.304 --rc genhtml_function_coverage=1 00:14:45.304 --rc genhtml_legend=1 00:14:45.304 --rc geninfo_all_blocks=1 00:14:45.304 --rc geninfo_unexecuted_blocks=1 00:14:45.304 00:14:45.304 ' 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:45.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.304 --rc genhtml_branch_coverage=1 00:14:45.304 --rc genhtml_function_coverage=1 00:14:45.304 --rc genhtml_legend=1 00:14:45.304 --rc geninfo_all_blocks=1 00:14:45.304 --rc geninfo_unexecuted_blocks=1 00:14:45.304 00:14:45.304 ' 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:45.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.304 --rc genhtml_branch_coverage=1 00:14:45.304 --rc genhtml_function_coverage=1 00:14:45.304 --rc genhtml_legend=1 00:14:45.304 --rc geninfo_all_blocks=1 00:14:45.304 --rc geninfo_unexecuted_blocks=1 00:14:45.304 00:14:45.304 ' 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:45.304 06:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.304 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:45.305 06:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:14:45.305 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:14:51.884 06:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:51.884 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:51.884 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.884 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:51.885 Found net devices under 0000:86:00.0: cvl_0_0 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:51.885 Found net devices under 0000:86:00.1: cvl_0_1 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.885 06:25:22 
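Aside: the "Found net devices under 0000:86:00.x" lines above come from a loop that maps each candidate NIC's PCI address to its kernel net device purely through sysfs, with no vendor tooling. A minimal sketch of the same technique (PCI addresses copied from this log; an existence guard is added so the loop is safe on hosts without these devices):

for pci in 0000:86:00.0 0000:86:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue                 # no netdev bound to this function
        echo "Found net devices under $pci: ${path##*/}"
    done
done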
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:51.885 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:51.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:51.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:14:51.885 00:14:51.885 --- 10.0.0.2 ping statistics --- 00:14:51.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.885 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:14:51.885 00:14:51.885 --- 10.0.0.1 ping statistics --- 00:14:51.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.885 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=447879 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 447879 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 447879 ']' 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:51.885 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.143 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:52.143 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:14:52.143 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:52.143 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.143 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.401 06:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:52.401 06:25:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:02.477 Initializing NVMe Controllers 00:15:02.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:02.477 Initialization complete. Launching workers. 00:15:02.477 ======================================================== 00:15:02.477 Latency(us) 00:15:02.477 Device Information : IOPS MiB/s Average min max 00:15:02.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18193.26 71.07 3517.15 684.01 15579.35 00:15:02.477 ======================================================== 00:15:02.477 Total : 18193.26 71.07 3517.15 684.01 15579.35 00:15:02.477 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:02.477 rmmod nvme_tcp 00:15:02.477 rmmod nvme_fabrics 00:15:02.477 rmmod nvme_keyring 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:15:02.477 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 447879 ']' 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 447879 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 447879 ']' 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 447879 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 447879 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # 
process_name=nvmf 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 447879' 00:15:02.736 killing process with pid 447879 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 447879 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 447879 00:15:02.736 nvmf threads initialize successfully 00:15:02.736 bdev subsystem init successfully 00:15:02.736 created a nvmf target service 00:15:02.736 create targets's poll groups done 00:15:02.736 all subsystems of target started 00:15:02.736 nvmf target is running 00:15:02.736 all subsystems of target stopped 00:15:02.736 destroy targets's poll groups done 00:15:02.736 destroyed the nvmf target service 00:15:02.736 bdev subsystem finish successfully 00:15:02.736 nvmf threads destroy successfully 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.736 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.274 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:05.274 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:05.274 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:05.274 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:05.274 00:15:05.274 real 0m19.804s 00:15:05.274 user 0m45.804s 00:15:05.274 sys 0m6.148s 00:15:05.274 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:05.274 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 ************************************ 00:15:05.275 END TEST nvmf_example 00:15:05.275 ************************************ 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:05.275 06:25:36 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 ************************************ 00:15:05.275 START TEST nvmf_filesystem 00:15:05.275 ************************************ 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:05.275 * Looking for test storage... 00:15:05.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:05.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.275 --rc genhtml_branch_coverage=1 00:15:05.275 --rc genhtml_function_coverage=1 00:15:05.275 --rc genhtml_legend=1 00:15:05.275 --rc geninfo_all_blocks=1 00:15:05.275 --rc geninfo_unexecuted_blocks=1 00:15:05.275 00:15:05.275 ' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:05.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.275 --rc genhtml_branch_coverage=1 00:15:05.275 --rc genhtml_function_coverage=1 00:15:05.275 --rc genhtml_legend=1 00:15:05.275 --rc geninfo_all_blocks=1 00:15:05.275 --rc geninfo_unexecuted_blocks=1 00:15:05.275 00:15:05.275 ' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:05.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.275 --rc genhtml_branch_coverage=1 00:15:05.275 --rc genhtml_function_coverage=1 00:15:05.275 --rc genhtml_legend=1 00:15:05.275 --rc geninfo_all_blocks=1 00:15:05.275 --rc geninfo_unexecuted_blocks=1 00:15:05.275 00:15:05.275 ' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:05.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.275 --rc genhtml_branch_coverage=1 00:15:05.275 --rc genhtml_function_coverage=1 00:15:05.275 --rc genhtml_legend=1 00:15:05.275 --rc geninfo_all_blocks=1 00:15:05.275 --rc geninfo_unexecuted_blocks=1 00:15:05.275 00:15:05.275 ' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:05.275 06:25:36 
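
[editor's note] The lt/cmp_versions walk traced above (scripts/common.sh@333-368) is a component-wise version compare: both versions are split on dots and dashes via IFS, then compared field by field, with missing fields treated as zero so that 1.15 sorts below 2. A condensed re-implementation of the same idea, for illustration only:

lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # Missing components count as 0, so "1.15" vs "2" decides on 1 < 2.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "1.15 < 2"
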
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:05.275 
06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:05.275 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:05.276 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:05.276 #define SPDK_CONFIG_H 00:15:05.276 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:05.276 #define SPDK_CONFIG_APPS 1 00:15:05.276 #define SPDK_CONFIG_ARCH native 00:15:05.276 #undef SPDK_CONFIG_ASAN 00:15:05.276 #undef SPDK_CONFIG_AVAHI 00:15:05.276 #undef SPDK_CONFIG_CET 00:15:05.276 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:05.276 #define SPDK_CONFIG_COVERAGE 1 00:15:05.276 #define SPDK_CONFIG_CROSS_PREFIX 00:15:05.276 #undef SPDK_CONFIG_CRYPTO 00:15:05.276 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:05.276 #undef SPDK_CONFIG_CUSTOMOCF 00:15:05.276 #undef SPDK_CONFIG_DAOS 00:15:05.276 #define SPDK_CONFIG_DAOS_DIR 00:15:05.276 #define SPDK_CONFIG_DEBUG 1 00:15:05.276 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:05.276 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:05.276 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:05.276 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:05.276 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:05.276 #undef SPDK_CONFIG_DPDK_UADK 00:15:05.276 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:05.276 #define SPDK_CONFIG_EXAMPLES 1 00:15:05.276 #undef SPDK_CONFIG_FC 00:15:05.276 #define SPDK_CONFIG_FC_PATH 00:15:05.276 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:05.276 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:05.276 #define SPDK_CONFIG_FSDEV 1 00:15:05.276 #undef SPDK_CONFIG_FUSE 00:15:05.276 #undef SPDK_CONFIG_FUZZER 00:15:05.276 #define SPDK_CONFIG_FUZZER_LIB 00:15:05.276 #undef SPDK_CONFIG_GOLANG 00:15:05.276 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:05.276 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:05.276 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:05.276 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:05.276 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:05.276 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:05.276 #undef SPDK_CONFIG_HAVE_LZ4 00:15:05.276 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:05.276 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:05.276 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:05.276 #define SPDK_CONFIG_IDXD 1 00:15:05.276 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:05.276 #undef SPDK_CONFIG_IPSEC_MB 00:15:05.276 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:05.276 #define SPDK_CONFIG_ISAL 1 00:15:05.276 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:05.276 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:05.276 #define SPDK_CONFIG_LIBDIR 00:15:05.276 #undef SPDK_CONFIG_LTO 00:15:05.276 #define SPDK_CONFIG_MAX_LCORES 128 00:15:05.276 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:05.276 #define SPDK_CONFIG_NVME_CUSE 1 00:15:05.276 #undef SPDK_CONFIG_OCF 00:15:05.276 #define SPDK_CONFIG_OCF_PATH 00:15:05.277 #define SPDK_CONFIG_OPENSSL_PATH 00:15:05.277 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:05.277 #define SPDK_CONFIG_PGO_DIR 00:15:05.277 #undef SPDK_CONFIG_PGO_USE 00:15:05.277 #define SPDK_CONFIG_PREFIX /usr/local 00:15:05.277 #undef SPDK_CONFIG_RAID5F 00:15:05.277 #undef SPDK_CONFIG_RBD 00:15:05.277 #define SPDK_CONFIG_RDMA 1 00:15:05.277 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:05.277 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:05.277 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:05.277 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:05.277 #define SPDK_CONFIG_SHARED 1 00:15:05.277 #undef SPDK_CONFIG_SMA 00:15:05.277 #define SPDK_CONFIG_TESTS 1 00:15:05.277 #undef SPDK_CONFIG_TSAN 
00:15:05.277 #define SPDK_CONFIG_UBLK 1 00:15:05.277 #define SPDK_CONFIG_UBSAN 1 00:15:05.277 #undef SPDK_CONFIG_UNIT_TESTS 00:15:05.277 #undef SPDK_CONFIG_URING 00:15:05.277 #define SPDK_CONFIG_URING_PATH 00:15:05.277 #undef SPDK_CONFIG_URING_ZNS 00:15:05.277 #undef SPDK_CONFIG_USDT 00:15:05.277 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:05.277 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:05.277 #define SPDK_CONFIG_VFIO_USER 1 00:15:05.277 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:05.277 #define SPDK_CONFIG_VHOST 1 00:15:05.277 #define SPDK_CONFIG_VIRTIO 1 00:15:05.277 #undef SPDK_CONFIG_VTUNE 00:15:05.277 #define SPDK_CONFIG_VTUNE_DIR 00:15:05.277 #define SPDK_CONFIG_WERROR 1 00:15:05.277 #define SPDK_CONFIG_WPDK_DIR 00:15:05.277 #undef SPDK_CONFIG_XNVME 00:15:05.277 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
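
[editor's note] Two views of the same build configuration appear above: build_config.sh holds the flags as flat CONFIG_*=value shell assignments, and include/spdk/config.h holds them as C defines, which is why the entire header gets echoed into the trace when applications.sh pattern-matches it. A sketch of how a script can consult either form, with $rootdir standing in for the SPDK checkout root:

# Shell-level flags: just source the generated assignments and branch.
source "$rootdir/test/common/build_config.sh"
[[ $CONFIG_UBSAN == y ]] && echo "UBSAN build"

# C-level flags: $(<file) reads the whole header in one gulp, and the
# *pattern* glob match finds the define anywhere in it -- the same
# one-gulp check applications.sh traces above.
config_h=$rootdir/include/spdk/config.h
if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build: debug-only apps were compiled"
fi
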
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:05.277 06:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
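
[editor's note] pm/common, sourced above, tracks which power/performance monitors need root in a bash associative array and only appends the physical-host monitors when its guards pass (not QEMU, no /.dockerenv, output dir present). A simplified illustration of that bookkeeping; the QEMU guard is reduced here to the container check, since the traced script inspects a DMI string:

declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1       # BMC power readings need root
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
SUDO=('' 'sudo -E')          # indexed by the 0/1 flags above
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
# On a physical Linux host (not a VM, not a container) add the extra monitors.
if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
fi
for mon in "${MONITOR_RESOURCES[@]}"; do
    prefix=${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]}
    echo "would launch: ${prefix:+$prefix }$mon"
done
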
00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:05.277 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:05.278 06:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:05.278 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
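
[editor's note] The long run of ': 0' / 'export SPDK_TEST_*' pairs above (autotest_common.sh@58 onward) is the shell's default-parameter idiom: ':' is a no-op, and the ${VAR:=default} expansion in its argument assigns the default only when the variable is unset, so values injected by autorun-spdk.conf (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810) survive while untouched flags fall back to 0. The pattern in isolation:

# The default-then-export idiom behind each ": 0 / export" pair above.
: "${SPDK_TEST_NVMF:=0}"              # assigns 0 only if the CI config left it unset
export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
export SPDK_TEST_NVMF_TRANSPORT
# Child scripts can now branch on the frozen values:
if (( SPDK_TEST_NVMF )); then
    echo "running nvmf tests over $SPDK_TEST_NVMF_TRANSPORT"
fi
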
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
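
[editor's note] Worth noting in the exports above: LD_LIBRARY_PATH and PYTHONPATH carry the same spdk/dpdk/libvfio-user directories several times over, because every nested run_test re-sources autotest_common.sh and each pass prepends the same entries again. Harmless, just noisy. Purely as an illustration (the harness does not do this), a dedup pass keeping first occurrences would look like:

dedup_path() {
    local out='' entry
    declare -A seen=()
    while IFS= read -r -d ':' entry; do
        [[ -z $entry || -n ${seen[$entry]:-} ]] && continue
        seen[$entry]=1
        out+=${out:+:}$entry
    done <<< "$1:"     # trailing : so the last entry is consumed too
    printf '%s\n' "$out"
}
LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")
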
00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
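
[editor's note] The suppression-file dance above is LeakSanitizer's standard mechanism: a plain-text file of leak:<pattern> lines, pointed at by LSAN_OPTIONS, mutes known leaks in third-party code (here libfuse3) for every instrumented binary started afterwards. Reduced to its essentials:

# Sketch of the LSAN suppression setup traced above.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -f "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # one pattern per line
export LSAN_OPTIONS=suppressions=$asan_suppression_file
# Leak reports matching the pattern are now silenced in ASAN/LSAN binaries.
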
00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:15:05.279 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 450277 ]] 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 450277 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:15:05.279 
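
[editor's note] kill -0 450277, traced above, sends no signal at all: signal 0 performs only the existence-and-permission check, so the command's exit status answers "is this pid alive and mine to signal?" before test storage gets provisioned for it. The probe on its own:

pid=450277   # pid value taken from the trace; any pid probes the same way
if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
else
    echo "process $pid exited (or belongs to another user)"
fi
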
06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:15:05.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.vUR7Fz 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vUR7Fz/tests/target /tmp/spdk.vUR7Fz 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:15:05.280 06:25:37 
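
[editor's note] set_test_storage, entered above, first builds an ordered list of places it may put test data: the test's own directory, then a per-test subdirectory under a fresh /tmp fallback whose name comes from mktemp -u (generate a unique name, create nothing). A sketch of that setup, with $testdir assumed to be set by the calling test as in the trace:

# -u: dry-run (print a unique name only); -d: directory-style template;
# -t: root it under $TMPDIR (default /tmp). Nothing exists until mkdir -p.
storage_fallback=$(mktemp -udt spdk.XXXXXX)
storage_candidates=(
    "$testdir"                                # prefer the test's own directory
    "$storage_fallback/tests/${testdir##*/}"  # else a per-test dir in the fallback
    "$storage_fallback"
)
mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}"
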
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189133148160 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963973632 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6830825472 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97970618368 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981984768 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169753088 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192797184 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981431808 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981988864 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=557056 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:05.280 06:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:15:05.280 * Looking for test storage... 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189133148160 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9045417984 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:15:05.280 06:25:37 
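
(The storage probe traced above reduces to the sketch below of set_test_storage from common/autotest_common.sh. The control flow is inferred from the xtrace; the overlay/tmpfs resize branch and most error handling are omitted, so this is an approximation, not the exact helper.)

    set_test_storage() {
        # The trace shows 64 MiB of headroom added (2147483648 -> 2214592512).
        local requested_size=$(($1 + 64 * 1024 * 1024))
        local source fs size use avail mount target_dir
        local -A avails
        local storage_fallback
        local -a storage_candidates
        storage_fallback=$(mktemp -udt spdk.XXXXXX)
        # Prefer the test's own dir, then a per-test dir under the tmp fallback.
        storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
        mkdir -p "${storage_candidates[@]}"
        # df -T reports 1K blocks; index available bytes per mount point.
        while read -r source fs size use avail _ mount; do
            avails["$mount"]=$((avail * 1024))
        done < <(df -T | grep -v Filesystem)
        for target_dir in "${storage_candidates[@]}"; do
            mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
            if (( ${avails[$mount]:-0} >= requested_size )); then
                export SPDK_TEST_STORAGE=$target_dir
                printf '* Found test storage at %s\n' "$target_dir"
                return 0
            fi
        done
        return 1
    }
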
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:15:05.280 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:05.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.542 --rc genhtml_branch_coverage=1 00:15:05.542 --rc genhtml_function_coverage=1 00:15:05.542 --rc genhtml_legend=1 00:15:05.542 --rc geninfo_all_blocks=1 00:15:05.542 --rc geninfo_unexecuted_blocks=1 00:15:05.542 00:15:05.542 ' 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:05.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.542 --rc genhtml_branch_coverage=1 00:15:05.542 --rc genhtml_function_coverage=1 00:15:05.542 --rc genhtml_legend=1 00:15:05.542 --rc geninfo_all_blocks=1 00:15:05.542 --rc geninfo_unexecuted_blocks=1 00:15:05.542 00:15:05.542 ' 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:05.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.542 --rc genhtml_branch_coverage=1 00:15:05.542 --rc genhtml_function_coverage=1 00:15:05.542 --rc genhtml_legend=1 00:15:05.542 --rc geninfo_all_blocks=1 00:15:05.542 --rc geninfo_unexecuted_blocks=1 00:15:05.542 00:15:05.542 ' 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:05.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.542 --rc genhtml_branch_coverage=1 00:15:05.542 --rc genhtml_function_coverage=1 00:15:05.542 --rc genhtml_legend=1 00:15:05.542 --rc geninfo_all_blocks=1 00:15:05.542 --rc geninfo_unexecuted_blocks=1 00:15:05.542 00:15:05.542 ' 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
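
(The lcov version probe above runs the component-wise comparison helpers from scripts/common.sh. A minimal sketch, assuming purely numeric version fields; the real helper sanitizes each field through its decimal() function first.)

    cmp_versions() {
        local IFS=.-: op=$2 v                 # '.', '-' and ':' all split fields
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                     # all fields equal; sketch handles <, > and == only
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov 1.x detected"     # matches the trace above

Since 1.15 < 2, the harness exports the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 spelling seen above, presumably because lcov 2.x renamed those rc options.
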
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:05.542 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:05.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:05.543 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:12.111 
06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:12.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:12.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:12.111 Found net devices under 0000:86:00.0: cvl_0_0 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:12.111 Found net devices under 
0000:86:00.1: cvl_0_1 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:12.111 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.112 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:12.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:15:12.112 00:15:12.112 --- 10.0.0.2 ping statistics --- 00:15:12.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.112 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:15:12.112 00:15:12.112 --- 10.0.0.1 ping statistics --- 00:15:12.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.112 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:12.112 ************************************ 00:15:12.112 START TEST nvmf_filesystem_no_in_capsule 00:15:12.112 ************************************ 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
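
(Stripped of the xtrace prefixes, the topology built above is the following; commands are copied from the trace. cvl_0_0 and cvl_0_1 are the two ports of the e810 NIC found earlier; moving the target port into its own namespace forces initiator/target traffic onto the wire, and every later target-side command runs through 'ip netns exec cvl_0_0_ns_spdk'.)

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port, namespaced
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
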
00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=453532 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 453532 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 453532 ']' 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.112 [2024-11-20 06:25:43.280123] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:15:12.112 [2024-11-20 06:25:43.280170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.112 [2024-11-20 06:25:43.359840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.112 [2024-11-20 06:25:43.400813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.112 [2024-11-20 06:25:43.400852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.112 [2024-11-20 06:25:43.400859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.112 [2024-11-20 06:25:43.400865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.112 [2024-11-20 06:25:43.400869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
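
(waitforlisten, called above with the freshly launched nvmf_tgt pid, essentially polls the app's JSON-RPC socket until it answers. A minimal sketch; the real helper in common/autotest_common.sh has more elaborate retry and error plumbing. rpc.py's -s and -t flags are its socket path and timeout.)

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i=0 max_retries=100             # max_retries=100 appears in the trace
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" || return 1        # app died during startup
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }
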
00:15:12.112 [2024-11-20 06:25:43.402474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.112 [2024-11-20 06:25:43.402580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.112 [2024-11-20 06:25:43.402663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.112 [2024-11-20 06:25:43.402664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.112 [2024-11-20 06:25:43.551804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.112 Malloc1 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.112 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.112 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.113 [2024-11-20 06:25:43.694128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:12.113 { 00:15:12.113 "name": "Malloc1", 00:15:12.113 "aliases": [ 00:15:12.113 "4deb0035-5ac8-4913-90ab-0d494891bb41" 00:15:12.113 ], 00:15:12.113 "product_name": "Malloc disk", 00:15:12.113 "block_size": 512, 00:15:12.113 "num_blocks": 1048576, 00:15:12.113 "uuid": "4deb0035-5ac8-4913-90ab-0d494891bb41", 00:15:12.113 "assigned_rate_limits": { 00:15:12.113 "rw_ios_per_sec": 0, 00:15:12.113 "rw_mbytes_per_sec": 0, 00:15:12.113 "r_mbytes_per_sec": 0, 00:15:12.113 "w_mbytes_per_sec": 0 00:15:12.113 }, 00:15:12.113 "claimed": true, 00:15:12.113 "claim_type": "exclusive_write", 00:15:12.113 "zoned": false, 00:15:12.113 "supported_io_types": { 00:15:12.113 "read": 
true, 00:15:12.113 "write": true, 00:15:12.113 "unmap": true, 00:15:12.113 "flush": true, 00:15:12.113 "reset": true, 00:15:12.113 "nvme_admin": false, 00:15:12.113 "nvme_io": false, 00:15:12.113 "nvme_io_md": false, 00:15:12.113 "write_zeroes": true, 00:15:12.113 "zcopy": true, 00:15:12.113 "get_zone_info": false, 00:15:12.113 "zone_management": false, 00:15:12.113 "zone_append": false, 00:15:12.113 "compare": false, 00:15:12.113 "compare_and_write": false, 00:15:12.113 "abort": true, 00:15:12.113 "seek_hole": false, 00:15:12.113 "seek_data": false, 00:15:12.113 "copy": true, 00:15:12.113 "nvme_iov_md": false 00:15:12.113 }, 00:15:12.113 "memory_domains": [ 00:15:12.113 { 00:15:12.113 "dma_device_id": "system", 00:15:12.113 "dma_device_type": 1 00:15:12.113 }, 00:15:12.113 { 00:15:12.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.113 "dma_device_type": 2 00:15:12.113 } 00:15:12.113 ], 00:15:12.113 "driver_specific": {} 00:15:12.113 } 00:15:12.113 ]' 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:12.113 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.485 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.486 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:15:13.486 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.486 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:13.486 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:15:15.382 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:15.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:15.948 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:16.879 ************************************ 00:15:16.879 START TEST filesystem_ext4 00:15:16.879 ************************************ 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
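
(Condensed from the preceding entries: the target is populated over JSON-RPC, with rpc_cmd acting as the harness wrapper around scripts/rpc.py, then the kernel initiator connects and the resulting namespace is partitioned for the filesystem tests.)

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'   # -> nvme0n1
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
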
00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:15:16.879 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:16.879 mke2fs 1.47.0 (5-Feb-2023) 00:15:16.879 Discarding device blocks: 0/522240 done 00:15:16.879 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:16.879 Filesystem UUID: 28dbeefb-644e-43a5-a628-e2272756f236 00:15:16.879 Superblock backups stored on blocks: 00:15:16.879 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:16.879 00:15:16.879 Allocating group tables: 0/64 done 00:15:17.136 Writing inode tables: 0/64 done 00:15:17.136 Creating journal (8192 blocks): done 00:15:19.073 Writing superblocks and filesystem accounting information: 0/64 done 00:15:19.073 00:15:19.073 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:15:19.073 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:25.628 
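
(The check that just passed for ext4, and repeats below for the other filesystems, reduces to this sketch of nvmf_filesystem_create and make_filesystem from target/filesystem.sh; the retry counter and cleanup handling are omitted.)

    make_filesystem() {
        local fstype=$1 dev_name=$2 force=-f
        [[ $fstype == ext4 ]] && force=-F     # mkfs.ext4 spells "force" as -F
        "mkfs.$fstype" $force "$dev_name"
    }

    nvmf_filesystem_create() {
        local fstype=$1 nvme_name=$2
        make_filesystem "$fstype" "/dev/${nvme_name}p1"
        mount "/dev/${nvme_name}p1" /mnt/device
        touch /mnt/device/aaa                 # tiny write round-trip over NVMe/TCP
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
        kill -0 "$nvmfpid"                    # the target app must have survived the I/O
        lsblk -l -o NAME | grep -q -w "$nvme_name"
        lsblk -l -o NAME | grep -q -w "${nvme_name}p1"
    }
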
06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 453532 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:25.628 00:15:25.628 real 0m8.161s 00:15:25.628 user 0m0.022s 00:15:25.628 sys 0m0.081s 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:25.628 ************************************ 00:15:25.628 END TEST filesystem_ext4 00:15:25.628 ************************************ 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:25.628 ************************************ 00:15:25.628 START TEST filesystem_btrfs 00:15:25.628 ************************************ 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:15:25.628 06:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:15:25.628 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:25.628 btrfs-progs v6.8.1 00:15:25.628 See https://btrfs.readthedocs.io for more information. 00:15:25.628 00:15:25.628 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:25.628 NOTE: several default settings have changed in version 5.15, please make sure 00:15:25.628 this does not affect your deployments: 00:15:25.628 - DUP for metadata (-m dup) 00:15:25.628 - enabled no-holes (-O no-holes) 00:15:25.628 - enabled free-space-tree (-R free-space-tree) 00:15:25.628 00:15:25.628 Label: (null) 00:15:25.628 UUID: 08dd6161-56a2-42e3-aa3e-519f47659ab4 00:15:25.628 Node size: 16384 00:15:25.628 Sector size: 4096 (CPU page size: 4096) 00:15:25.628 Filesystem size: 510.00MiB 00:15:25.628 Block group profiles: 00:15:25.628 Data: single 8.00MiB 00:15:25.628 Metadata: DUP 32.00MiB 00:15:25.628 System: DUP 8.00MiB 00:15:25.628 SSD detected: yes 00:15:25.628 Zoned device: no 00:15:25.628 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:25.628 Checksum: crc32c 00:15:25.628 Number of devices: 1 00:15:25.628 Devices: 00:15:25.628 ID SIZE PATH 00:15:25.628 1 510.00MiB /dev/nvme0n1p1 00:15:25.628 00:15:25.628 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:15:25.628 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:26.194 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:26.194 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:26.194 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:26.194 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:26.194 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:26.194 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:26.194 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 453532 00:15:26.194 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:26.194 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:26.452 
06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:26.452 00:15:26.452 real 0m1.194s 00:15:26.452 user 0m0.026s 00:15:26.452 sys 0m0.111s 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:26.452 ************************************ 00:15:26.452 END TEST filesystem_btrfs 00:15:26.452 ************************************ 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:26.452 ************************************ 00:15:26.452 START TEST filesystem_xfs 00:15:26.452 ************************************ 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:15:26.452 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:26.452 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:26.452 = sectsz=512 attr=2, projid32bit=1 00:15:26.452 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:26.452 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:26.452 data 
= bsize=4096 blocks=130560, imaxpct=25 00:15:26.452 = sunit=0 swidth=0 blks 00:15:26.452 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:26.452 log =internal log bsize=4096 blocks=16384, version=2 00:15:26.452 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:26.452 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:27.384 Discarding blocks...Done. 00:15:27.384 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:15:27.384 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 453532 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:29.909 00:15:29.909 real 0m3.286s 00:15:29.909 user 0m0.024s 00:15:29.909 sys 0m0.075s 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:29.909 ************************************ 00:15:29.909 END TEST filesystem_xfs 00:15:29.909 ************************************ 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:29.909 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.166 06:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 453532 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 453532 ']' 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 453532 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 453532 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:30.166 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:30.167 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 453532' 00:15:30.167 killing process with pid 453532 00:15:30.167 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 453532 00:15:30.167 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 453532 00:15:30.425 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:30.425 00:15:30.425 real 0m19.031s 00:15:30.425 user 1m14.936s 00:15:30.425 sys 0m1.467s 00:15:30.425 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:30.425 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.425 ************************************ 00:15:30.425 END TEST nvmf_filesystem_no_in_capsule 00:15:30.425 ************************************ 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:30.684 ************************************ 00:15:30.684 START TEST nvmf_filesystem_in_capsule 00:15:30.684 ************************************ 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:30.684 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=456765 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 456765 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 456765 ']' 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
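This second half repeats the suite with in-capsule data enabled: the target runs inside the cvl_0_0_ns_spdk namespace and, as the trace below shows, the TCP transport is created with -c 4096 so small host writes travel inside the command capsule instead of being fetched with a separate data transfer. The configuration sequence traced below, condensed into direct rpc.py calls — rpc_cmd in the log wraps the same script; paths are shortened here, the readiness poll is a stand-in for the waitforlisten helper, and the hostnqn/hostid flags on nvme connect are omitted:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the RPC socket answers before issuing any configuration
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420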
00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:30.685 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.685 [2024-11-20 06:26:02.387980] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:15:30.685 [2024-11-20 06:26:02.388020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.685 [2024-11-20 06:26:02.447218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.685 [2024-11-20 06:26:02.489015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.685 [2024-11-20 06:26:02.489052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.685 [2024-11-20 06:26:02.489059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.685 [2024-11-20 06:26:02.489065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.685 [2024-11-20 06:26:02.489070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.685 [2024-11-20 06:26:02.490630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.685 [2024-11-20 06:26:02.490741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.685 [2024-11-20 06:26:02.490872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.685 [2024-11-20 06:26:02.490873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.943 [2024-11-20 06:26:02.626688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.943 06:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.943 Malloc1 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.943 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:30.943 [2024-11-20 06:26:02.773192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.202 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.202 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:31.202 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:15:31.202 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:15:31.203 06:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:31.203 { 00:15:31.203 "name": "Malloc1", 00:15:31.203 "aliases": [ 00:15:31.203 "52850608-8149-4f47-a8dd-747f04e7aa71" 00:15:31.203 ], 00:15:31.203 "product_name": "Malloc disk", 00:15:31.203 "block_size": 512, 00:15:31.203 "num_blocks": 1048576, 00:15:31.203 "uuid": "52850608-8149-4f47-a8dd-747f04e7aa71", 00:15:31.203 "assigned_rate_limits": { 00:15:31.203 "rw_ios_per_sec": 0, 00:15:31.203 "rw_mbytes_per_sec": 0, 00:15:31.203 "r_mbytes_per_sec": 0, 00:15:31.203 "w_mbytes_per_sec": 0 00:15:31.203 }, 00:15:31.203 "claimed": true, 00:15:31.203 "claim_type": "exclusive_write", 00:15:31.203 "zoned": false, 00:15:31.203 "supported_io_types": { 00:15:31.203 "read": true, 00:15:31.203 "write": true, 00:15:31.203 "unmap": true, 00:15:31.203 "flush": true, 00:15:31.203 "reset": true, 00:15:31.203 "nvme_admin": false, 00:15:31.203 "nvme_io": false, 00:15:31.203 "nvme_io_md": false, 00:15:31.203 "write_zeroes": true, 00:15:31.203 "zcopy": true, 00:15:31.203 "get_zone_info": false, 00:15:31.203 "zone_management": false, 00:15:31.203 "zone_append": false, 00:15:31.203 "compare": false, 00:15:31.203 "compare_and_write": false, 00:15:31.203 "abort": true, 00:15:31.203 "seek_hole": false, 00:15:31.203 "seek_data": false, 00:15:31.203 "copy": true, 00:15:31.203 "nvme_iov_md": false 00:15:31.203 }, 00:15:31.203 "memory_domains": [ 00:15:31.203 { 00:15:31.203 "dma_device_id": "system", 00:15:31.203 "dma_device_type": 1 00:15:31.203 }, 00:15:31.203 { 00:15:31.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.203 "dma_device_type": 2 00:15:31.203 } 00:15:31.203 ], 00:15:31.203 "driver_specific": {} 00:15:31.203 } 00:15:31.203 ]' 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:31.203 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.576 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:32.576 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:15:32.576 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.576 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:32.576 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:34.475 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:34.733 06:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:35.298 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.670 ************************************ 00:15:36.670 START TEST filesystem_in_capsule_ext4 00:15:36.670 ************************************ 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:15:36.670 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:36.670 mke2fs 1.47.0 (5-Feb-2023) 00:15:36.670 Discarding device blocks: 0/522240 done 00:15:36.670 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:36.670 Filesystem UUID: 85bceef5-3cfa-422e-9420-9c8e195e1cd0 00:15:36.670 Superblock backups stored on blocks: 00:15:36.670 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:36.670 00:15:36.670 Allocating group tables: 0/64 done 00:15:36.670 Writing inode tables: 
0/64 done 00:15:38.566 Creating journal (8192 blocks): done 00:15:39.390 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:15:39.390 00:15:39.390 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:15:39.390 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 456765 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:45.943 00:15:45.943 real 0m9.214s 00:15:45.943 user 0m0.037s 00:15:45.943 sys 0m0.066s 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:45.943 ************************************ 00:15:45.943 END TEST filesystem_in_capsule_ext4 00:15:45.943 ************************************ 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:45.943 
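The asterisk banners and the real/user/sys triplets that bracket each variant come from the run_test wrapper. Its visible shape, reconstructed from this output alone (the actual helper in autotest_common.sh additionally manages xtrace state and exit-code propagation):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                    # emits the real/user/sys lines seen in this log
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }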
************************************ 00:15:45.943 START TEST filesystem_in_capsule_btrfs 00:15:45.943 ************************************ 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:15:45.943 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:45.943 btrfs-progs v6.8.1 00:15:45.943 See https://btrfs.readthedocs.io for more information. 00:15:45.943 00:15:45.943 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:45.943 NOTE: several default settings have changed in version 5.15, please make sure 00:15:45.943 this does not affect your deployments: 00:15:45.943 - DUP for metadata (-m dup) 00:15:45.943 - enabled no-holes (-O no-holes) 00:15:45.943 - enabled free-space-tree (-R free-space-tree) 00:15:45.943 00:15:45.943 Label: (null) 00:15:45.943 UUID: b7225b42-8b2f-4d00-a0a7-3a5533acf72a 00:15:45.943 Node size: 16384 00:15:45.943 Sector size: 4096 (CPU page size: 4096) 00:15:45.943 Filesystem size: 510.00MiB 00:15:45.943 Block group profiles: 00:15:45.943 Data: single 8.00MiB 00:15:45.943 Metadata: DUP 32.00MiB 00:15:45.943 System: DUP 8.00MiB 00:15:45.943 SSD detected: yes 00:15:45.943 Zoned device: no 00:15:45.943 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:45.944 Checksum: crc32c 00:15:45.944 Number of devices: 1 00:15:45.944 Devices: 00:15:45.944 ID SIZE PATH 00:15:45.944 1 510.00MiB /dev/nvme0n1p1 00:15:45.944 00:15:45.944 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:15:45.944 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 456765 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:46.509 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:46.510 00:15:46.510 real 0m0.945s 00:15:46.510 user 0m0.028s 00:15:46.510 sys 0m0.114s 00:15:46.510 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:46.510 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:15:46.510 ************************************ 00:15:46.510 END TEST filesystem_in_capsule_btrfs 00:15:46.510 ************************************ 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.767 ************************************ 00:15:46.767 START TEST filesystem_in_capsule_xfs 00:15:46.767 ************************************ 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:15:46.767 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:46.767 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:46.767 = sectsz=512 attr=2, projid32bit=1 00:15:46.767 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:46.767 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:46.767 data = bsize=4096 blocks=130560, imaxpct=25 00:15:46.767 = sunit=0 swidth=0 blks 00:15:46.767 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:46.767 log =internal log bsize=4096 blocks=16384, version=2 00:15:46.767 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:46.767 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:47.715 Discarding blocks...Done. 
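The mkfs.xfs geometry just printed can be sanity-checked against the partition: the data section is 130560 blocks of 4096 bytes, the same 510 MiB that the ext4 runs report as 522240 1k blocks:

  echo $(( 130560 * 4096 ))               # 534773760 bytes
  echo $(( 130560 * 4096 / 1024 / 1024 )) # 510, matching "Filesystem size: 510.00MiB"

The backing malloc bdev is 512 MiB; presumably the GPT label and partition alignment account for the missing 2 MiB.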
00:15:47.715 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:15:47.715 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 456765 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:49.665 00:15:49.665 real 0m2.973s 00:15:49.665 user 0m0.023s 00:15:49.665 sys 0m0.077s 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:49.665 ************************************ 00:15:49.665 END TEST filesystem_in_capsule_xfs 00:15:49.665 ************************************ 00:15:49.665 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:49.922 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:49.922 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 456765 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 456765 ']' 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 456765 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 456765 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 456765' 00:15:50.186 killing process with pid 456765 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 456765 00:15:50.186 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 456765 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:50.452 00:15:50.452 real 0m19.905s 00:15:50.452 user 1m18.453s 00:15:50.452 sys 0m1.436s 00:15:50.452 06:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:50.452 ************************************ 00:15:50.452 END TEST nvmf_filesystem_in_capsule 00:15:50.452 ************************************ 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.452 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.452 rmmod nvme_tcp 00:15:50.711 rmmod nvme_fabrics 00:15:50.711 rmmod nvme_keyring 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:15:50.711 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:15:50.712 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:50.712 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:15:50.712 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:50.712 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:50.712 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.712 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.712 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.617 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:52.617 00:15:52.617 real 0m47.701s 00:15:52.617 user 2m35.474s 00:15:52.617 sys 0m7.586s 00:15:52.617 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:52.617 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:52.617 
************************************ 00:15:52.617 END TEST nvmf_filesystem 00:15:52.617 ************************************ 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.877 ************************************ 00:15:52.877 START TEST nvmf_target_discovery 00:15:52.877 ************************************ 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:52.877 * Looking for test storage... 00:15:52.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:52.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.877 --rc genhtml_branch_coverage=1 00:15:52.877 --rc genhtml_function_coverage=1 00:15:52.877 --rc genhtml_legend=1 00:15:52.877 --rc geninfo_all_blocks=1 00:15:52.877 --rc geninfo_unexecuted_blocks=1 00:15:52.877 00:15:52.877 ' 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:52.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.877 --rc genhtml_branch_coverage=1 00:15:52.877 --rc genhtml_function_coverage=1 00:15:52.877 --rc genhtml_legend=1 00:15:52.877 --rc geninfo_all_blocks=1 00:15:52.877 --rc geninfo_unexecuted_blocks=1 00:15:52.877 00:15:52.877 ' 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:52.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.877 --rc genhtml_branch_coverage=1 00:15:52.877 --rc genhtml_function_coverage=1 00:15:52.877 --rc genhtml_legend=1 00:15:52.877 --rc geninfo_all_blocks=1 00:15:52.877 --rc geninfo_unexecuted_blocks=1 00:15:52.877 00:15:52.877 ' 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:52.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.877 --rc genhtml_branch_coverage=1 00:15:52.877 --rc genhtml_function_coverage=1 00:15:52.877 --rc genhtml_legend=1 00:15:52.877 --rc geninfo_all_blocks=1 00:15:52.877 --rc geninfo_unexecuted_blocks=1 00:15:52.877 00:15:52.877 ' 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.877 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.878 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.137 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.138 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.138 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.138 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:59.711 06:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:59.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:59.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:59.711 Found net devices under 0000:86:00.0: cvl_0_0 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
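For orientation: the gather_supported_nvmf_pci_devs pass above whitelists NIC PCI IDs (0x1592/0x159b for Intel E810, 0x37d2 for X722, plus the Mellanox list) and then resolves each surviving PCI function to its kernel net device through sysfs. A minimal standalone sketch of the same lookup, assuming lspci in place of the script's internal pci_bus_cache, and using the 8086:159b ID this rig reports:

  # enumerate E810 functions, then map each to its net interface via sysfs
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
    done
  done

On this machine that should print the same cvl_0_0/cvl_0_1 pair the surrounding trace reports.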
00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:59.711 Found net devices under 0000:86:00.1: cvl_0_1 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.711 06:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:59.711 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:59.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:15:59.711 00:15:59.711 --- 10.0.0.2 ping statistics --- 00:15:59.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.712 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:59.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:59.712 00:15:59.712 --- 10.0.0.1 ping statistics --- 00:15:59.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.712 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=463740 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 463740 00:15:59.712 06:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 463740 ']' 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:59.712 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 [2024-11-20 06:26:30.804988] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:15:59.712 [2024-11-20 06:26:30.805032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.712 [2024-11-20 06:26:30.871260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.712 [2024-11-20 06:26:30.914423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.712 [2024-11-20 06:26:30.914460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.712 [2024-11-20 06:26:30.914468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.712 [2024-11-20 06:26:30.914474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.712 [2024-11-20 06:26:30.914479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
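The nvmf_tcp_init block above is what turns one dual-port NIC into a real two-endpoint test bed: the first port moves into a network namespace for the target, the second stays in the root namespace as the initiator, and an iptables rule admits NVMe/TCP on port 4420. Condensed from the trace (interface names, addresses, and flags are this rig's values; $SPDK_DIR stands in for the workspace checkout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root ns -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The sub-millisecond RTTs in the ping output confirm the loop is wired up before the target (pid 463740 here) starts listening.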
00:15:59.712 [2024-11-20 06:26:30.919222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.712 [2024-11-20 06:26:30.919264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.712 [2024-11-20 06:26:30.919371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.712 [2024-11-20 06:26:30.919372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 [2024-11-20 06:26:31.068153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 Null1 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 [2024-11-20 06:26:31.109477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 Null2 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:15:59.712 Null3 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 Null4 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.713 06:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:59.713 00:15:59.713 Discovery Log Number of Records 6, Generation counter 6 00:15:59.713 =====Discovery Log Entry 0====== 00:15:59.713 trtype: tcp 00:15:59.713 adrfam: ipv4 00:15:59.713 subtype: current discovery subsystem 00:15:59.713 treq: not required 00:15:59.713 portid: 0 00:15:59.713 trsvcid: 4420 00:15:59.713 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:59.713 traddr: 10.0.0.2 00:15:59.713 eflags: explicit discovery connections, duplicate discovery information 00:15:59.713 sectype: none 00:15:59.713 =====Discovery Log Entry 1====== 00:15:59.713 trtype: tcp 00:15:59.713 adrfam: ipv4 00:15:59.713 subtype: nvme subsystem 00:15:59.713 treq: not required 00:15:59.713 portid: 0 00:15:59.713 trsvcid: 4420 00:15:59.713 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:59.713 traddr: 10.0.0.2 00:15:59.713 eflags: none 00:15:59.713 sectype: none 00:15:59.713 =====Discovery Log Entry 2====== 00:15:59.713 trtype: tcp 00:15:59.713 adrfam: ipv4 00:15:59.713 subtype: nvme subsystem 00:15:59.713 treq: not required 00:15:59.713 portid: 0 00:15:59.713 trsvcid: 4420 00:15:59.713 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:59.713 traddr: 10.0.0.2 00:15:59.713 eflags: none 00:15:59.713 sectype: none 00:15:59.713 =====Discovery Log Entry 3====== 00:15:59.713 trtype: tcp 00:15:59.713 adrfam: ipv4 00:15:59.713 subtype: nvme subsystem 00:15:59.713 treq: not required 00:15:59.713 portid: 0 00:15:59.713 trsvcid: 4420 00:15:59.713 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:59.713 traddr: 10.0.0.2 00:15:59.713 eflags: none 00:15:59.713 sectype: none 00:15:59.713 =====Discovery Log Entry 4====== 00:15:59.713 trtype: tcp 00:15:59.713 adrfam: ipv4 00:15:59.713 subtype: nvme subsystem 
00:15:59.713 treq: not required 00:15:59.713 portid: 0 00:15:59.713 trsvcid: 4420 00:15:59.713 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:59.713 traddr: 10.0.0.2 00:15:59.713 eflags: none 00:15:59.713 sectype: none 00:15:59.713 =====Discovery Log Entry 5====== 00:15:59.713 trtype: tcp 00:15:59.713 adrfam: ipv4 00:15:59.713 subtype: discovery subsystem referral 00:15:59.713 treq: not required 00:15:59.713 portid: 0 00:15:59.713 trsvcid: 4430 00:15:59.713 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:59.713 traddr: 10.0.0.2 00:15:59.713 eflags: none 00:15:59.713 sectype: none 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:59.713 Perform nvmf subsystem discovery via RPC 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.713 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 [ 00:15:59.713 { 00:15:59.713 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:59.713 "subtype": "Discovery", 00:15:59.713 "listen_addresses": [ 00:15:59.713 { 00:15:59.713 "trtype": "TCP", 00:15:59.713 "adrfam": "IPv4", 00:15:59.713 "traddr": "10.0.0.2", 00:15:59.713 "trsvcid": "4420" 00:15:59.713 } 00:15:59.713 ], 00:15:59.713 "allow_any_host": true, 00:15:59.713 "hosts": [] 00:15:59.713 }, 00:15:59.713 { 00:15:59.713 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.713 "subtype": "NVMe", 00:15:59.713 "listen_addresses": [ 00:15:59.713 { 00:15:59.713 "trtype": "TCP", 00:15:59.713 "adrfam": "IPv4", 00:15:59.713 "traddr": "10.0.0.2", 00:15:59.713 "trsvcid": "4420" 00:15:59.713 } 00:15:59.713 ], 00:15:59.713 "allow_any_host": true, 00:15:59.713 "hosts": [], 00:15:59.713 "serial_number": "SPDK00000000000001", 00:15:59.713 "model_number": "SPDK bdev Controller", 00:15:59.713 "max_namespaces": 32, 00:15:59.713 "min_cntlid": 1, 00:15:59.713 "max_cntlid": 65519, 00:15:59.713 "namespaces": [ 00:15:59.713 { 00:15:59.713 "nsid": 1, 00:15:59.713 "bdev_name": "Null1", 00:15:59.713 "name": "Null1", 00:15:59.713 "nguid": "8422E740B48346C8B574605FCDD51295", 00:15:59.713 "uuid": "8422e740-b483-46c8-b574-605fcdd51295" 00:15:59.713 } 00:15:59.713 ] 00:15:59.713 }, 00:15:59.713 { 00:15:59.713 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:59.713 "subtype": "NVMe", 00:15:59.713 "listen_addresses": [ 00:15:59.713 { 00:15:59.713 "trtype": "TCP", 00:15:59.713 "adrfam": "IPv4", 00:15:59.713 "traddr": "10.0.0.2", 00:15:59.713 "trsvcid": "4420" 00:15:59.713 } 00:15:59.713 ], 00:15:59.713 "allow_any_host": true, 00:15:59.713 "hosts": [], 00:15:59.713 "serial_number": "SPDK00000000000002", 00:15:59.713 "model_number": "SPDK bdev Controller", 00:15:59.713 "max_namespaces": 32, 00:15:59.713 "min_cntlid": 1, 00:15:59.713 "max_cntlid": 65519, 00:15:59.713 "namespaces": [ 00:15:59.713 { 00:15:59.713 "nsid": 1, 00:15:59.713 "bdev_name": "Null2", 00:15:59.713 "name": "Null2", 00:15:59.713 "nguid": "9B45AF6BD4FF4CEE9876DC29222BEEDD", 00:15:59.713 "uuid": "9b45af6b-d4ff-4cee-9876-dc29222beedd" 00:15:59.713 } 00:15:59.713 ] 00:15:59.713 }, 00:15:59.713 { 00:15:59.713 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:59.713 "subtype": "NVMe", 00:15:59.713 "listen_addresses": [ 00:15:59.713 { 00:15:59.713 "trtype": "TCP", 00:15:59.713 "adrfam": "IPv4", 00:15:59.713 "traddr": "10.0.0.2", 
00:15:59.713 "trsvcid": "4420" 00:15:59.713 } 00:15:59.713 ], 00:15:59.713 "allow_any_host": true, 00:15:59.713 "hosts": [], 00:15:59.713 "serial_number": "SPDK00000000000003", 00:15:59.713 "model_number": "SPDK bdev Controller", 00:15:59.713 "max_namespaces": 32, 00:15:59.713 "min_cntlid": 1, 00:15:59.713 "max_cntlid": 65519, 00:15:59.713 "namespaces": [ 00:15:59.713 { 00:15:59.713 "nsid": 1, 00:15:59.713 "bdev_name": "Null3", 00:15:59.713 "name": "Null3", 00:15:59.713 "nguid": "CF3776AEF22C459D9BB052841A97A06B", 00:15:59.713 "uuid": "cf3776ae-f22c-459d-9bb0-52841a97a06b" 00:15:59.713 } 00:15:59.713 ] 00:15:59.713 }, 00:15:59.713 { 00:15:59.714 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:59.714 "subtype": "NVMe", 00:15:59.714 "listen_addresses": [ 00:15:59.714 { 00:15:59.714 "trtype": "TCP", 00:15:59.714 "adrfam": "IPv4", 00:15:59.714 "traddr": "10.0.0.2", 00:15:59.714 "trsvcid": "4420" 00:15:59.714 } 00:15:59.714 ], 00:15:59.714 "allow_any_host": true, 00:15:59.714 "hosts": [], 00:15:59.714 "serial_number": "SPDK00000000000004", 00:15:59.714 "model_number": "SPDK bdev Controller", 00:15:59.714 "max_namespaces": 32, 00:15:59.714 "min_cntlid": 1, 00:15:59.714 "max_cntlid": 65519, 00:15:59.714 "namespaces": [ 00:15:59.714 { 00:15:59.714 "nsid": 1, 00:15:59.714 "bdev_name": "Null4", 00:15:59.714 "name": "Null4", 00:15:59.714 "nguid": "82141CB32D8840BE8047C777911383D3", 00:15:59.714 "uuid": "82141cb3-2d88-40be-8047-c777911383d3" 00:15:59.714 } 00:15:59.714 ] 00:15:59.714 } 00:15:59.714 ] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:59.714 06:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:59.714 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:59.714 rmmod nvme_tcp 00:15:59.714 rmmod nvme_fabrics 00:15:59.974 rmmod nvme_keyring 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 463740 ']' 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 463740 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 463740 ']' 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 463740 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 463740 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 463740' 00:15:59.974 killing process with pid 463740 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 463740 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 463740 00:15:59.974 06:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.974 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.512 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:02.512 00:16:02.512 real 0m9.354s 00:16:02.512 user 0m5.360s 00:16:02.512 sys 0m4.888s 00:16:02.512 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:02.512 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.512 ************************************ 00:16:02.512 END TEST nvmf_target_discovery 00:16:02.512 ************************************ 00:16:02.512 06:26:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:02.512 06:26:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:02.512 06:26:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:02.512 06:26:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.512 ************************************ 00:16:02.512 START TEST nvmf_referrals 00:16:02.512 ************************************ 00:16:02.512 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:02.512 * Looking for test storage... 
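The nvmf_target_discovery run that just closed above tears down in strict reverse order of its setup: each cnode subsystem is deleted before its backing null bdev, the referral registered on port 4430 is removed last, and bdev_get_bdevs confirms nothing is left behind. Condensed from the trace into plain rpc.py calls (rpc_cmd in this harness effectively forwards to scripts/rpc.py), as a sketch rather than the exact script:

    # Teardown sequence mirrored from the trace: subsystem, then its bdev, then the referral.
    for i in 1 2 3 4; do
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        rpc.py bdev_null_delete Null$i
    done
    rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    rpc.py bdev_get_bdevs | jq -r '.[].name'    # expect empty output: nothing left behind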
00:16:02.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:02.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.512 --rc genhtml_branch_coverage=1 00:16:02.512 --rc genhtml_function_coverage=1 00:16:02.512 --rc genhtml_legend=1 00:16:02.512 --rc geninfo_all_blocks=1 00:16:02.512 --rc geninfo_unexecuted_blocks=1 00:16:02.512 00:16:02.512 ' 00:16:02.512 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:02.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.513 --rc genhtml_branch_coverage=1 00:16:02.513 --rc genhtml_function_coverage=1 00:16:02.513 --rc genhtml_legend=1 00:16:02.513 --rc geninfo_all_blocks=1 00:16:02.513 --rc geninfo_unexecuted_blocks=1 00:16:02.513 00:16:02.513 ' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:02.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.513 --rc genhtml_branch_coverage=1 00:16:02.513 --rc genhtml_function_coverage=1 00:16:02.513 --rc genhtml_legend=1 00:16:02.513 --rc geninfo_all_blocks=1 00:16:02.513 --rc geninfo_unexecuted_blocks=1 00:16:02.513 00:16:02.513 ' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:02.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.513 --rc genhtml_branch_coverage=1 00:16:02.513 --rc genhtml_function_coverage=1 00:16:02.513 --rc genhtml_legend=1 00:16:02.513 --rc geninfo_all_blocks=1 00:16:02.513 --rc geninfo_unexecuted_blocks=1 00:16:02.513 00:16:02.513 ' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
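The '[: : integer expression expected' complaint just above is a captured shell error rather than test output: common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty left operand, and test's -eq accepts only integers. A minimal reproduction, plus the usual defensive spelling (the variable name here is illustrative):

    # test's -eq requires integers on both sides; an empty expansion trips it.
    flag=""
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # defaulting the empty expansion avoids the error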
00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:02.513 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:09.086 06:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:09.086 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:09.086 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:09.086 
06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:09.086 Found net devices under 0000:86:00.0: cvl_0_0 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:09.086 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:09.087 Found net devices under 0000:86:00.1: cvl_0_1 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:09.087 06:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:09.087 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:09.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
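nvmf_tcp_init above fakes a two-host topology on one machine: one port of the E810 pair (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so traffic crosses a real TCP path. The commands from the trace, gathered in one place:

    # Target interface lives in its own netns; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # the pings below prove the path works in both directions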
00:16:09.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:16:09.087 00:16:09.087 --- 10.0.0.2 ping statistics --- 00:16:09.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.087 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:16:09.087 00:16:09.087 --- 10.0.0.1 ping statistics --- 00:16:09.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.087 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=467518 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 467518 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 467518 ']' 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
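nvmfappstart above launches nvmf_tgt inside the namespace with -m 0xF (reactor threads on cores 0-3, one per set bit) and -e 0xFFFF (all tracepoint groups), then waitforlisten blocks until the RPC socket answers; the four 'Reactor started' notices that follow come one per bit of that mask. A reduced sketch of the same start-and-wait step, with a simple polling loop standing in for the harness's fuller waitforlisten:

    # Start the target in the netns and poll its RPC socket until it is up.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done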
00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.087 [2024-11-20 06:26:40.179861] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:09.087 [2024-11-20 06:26:40.179905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.087 [2024-11-20 06:26:40.261499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.087 [2024-11-20 06:26:40.304093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.087 [2024-11-20 06:26:40.304129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.087 [2024-11-20 06:26:40.304136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.087 [2024-11-20 06:26:40.304142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.087 [2024-11-20 06:26:40.304147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.087 [2024-11-20 06:26:40.305726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.087 [2024-11-20 06:26:40.305834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.087 [2024-11-20 06:26:40.305869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.087 [2024-11-20 06:26:40.305870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.087 [2024-11-20 06:26:40.442813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
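The referral exercise that follows works both sides of the protocol: referrals registered over RPC must surface as "discovery subsystem referral" records in the discovery log page that nvme discover fetches from port 8009. In outline, with the jq filter copied from the trace (the harness additionally passes --hostnqn/--hostid to nvme discover):

    # Register three referrals, then check the RPC view and the wire view agree.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    rpc.py nvmf_discovery_get_referrals | jq length    # expect 3
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | \
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'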
00:16:09.087 [2024-11-20 06:26:40.456086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.087 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:09.088 06:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:09.088 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:09.346 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:09.604 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:09.604 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:09.604 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:09.604 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:09.604 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:09.604 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:09.604 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:09.861 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:09.861 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:09.861 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:09.861 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:09.861 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:09.861 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.119 06:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:10.119 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:10.377 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:10.377 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:10.377 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:10.377 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:16:10.377 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:10.377 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:10.635 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
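The referral checks traced above always compare two views of the same state: the RPC view (nvmf_discovery_get_referrals) and the host view (nvme discover against the discovery service on port 8009, with the record describing the current discovery subsystem filtered out). A condensed sketch of the add/verify/remove round trip, using only commands that appear in this run — rpc.py stands in for the harness's rpc_cmd wrapper, and the --hostnqn/--hostid flags are elided for brevity:

  # add a referral pointing hosts at a second discovery service
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery

  # RPC view: referral transport addresses
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # host view: referrals show up as extra discovery-log records; drop the
  # record for the discovery subsystem we are currently talking to
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # remove it again; both views should then be empty
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery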
00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:10.894 rmmod nvme_tcp 00:16:10.894 rmmod nvme_fabrics 00:16:10.894 rmmod nvme_keyring 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 467518 ']' 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 467518 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 467518 ']' 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 467518 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 467518 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 467518' 00:16:10.894 killing process with pid 467518 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 467518 00:16:10.894 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 467518 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.154 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.057 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:13.057 00:16:13.057 real 0m10.932s 00:16:13.057 user 0m12.402s 00:16:13.057 sys 0m5.265s 00:16:13.057 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:13.057 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:13.057 ************************************ 00:16:13.057 END TEST nvmf_referrals 00:16:13.057 ************************************ 00:16:13.317 06:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:13.317 06:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:13.317 06:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:13.317 06:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.317 ************************************ 00:16:13.317 START TEST nvmf_connect_disconnect 00:16:13.317 ************************************ 00:16:13.317 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:13.317 * Looking for test storage... 00:16:13.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.317 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:13.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.318 --rc genhtml_branch_coverage=1 00:16:13.318 --rc genhtml_function_coverage=1 00:16:13.318 --rc genhtml_legend=1 00:16:13.318 --rc geninfo_all_blocks=1 00:16:13.318 --rc geninfo_unexecuted_blocks=1 00:16:13.318 00:16:13.318 ' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:13.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.318 --rc genhtml_branch_coverage=1 00:16:13.318 --rc genhtml_function_coverage=1 00:16:13.318 --rc genhtml_legend=1 00:16:13.318 --rc geninfo_all_blocks=1 00:16:13.318 --rc geninfo_unexecuted_blocks=1 00:16:13.318 00:16:13.318 ' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:13.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.318 --rc genhtml_branch_coverage=1 00:16:13.318 --rc genhtml_function_coverage=1 00:16:13.318 --rc genhtml_legend=1 00:16:13.318 --rc geninfo_all_blocks=1 00:16:13.318 --rc geninfo_unexecuted_blocks=1 00:16:13.318 00:16:13.318 ' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:13.318 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.318 --rc genhtml_branch_coverage=1 00:16:13.318 --rc genhtml_function_coverage=1 00:16:13.318 --rc genhtml_legend=1 00:16:13.318 --rc geninfo_all_blocks=1 00:16:13.318 --rc geninfo_unexecuted_blocks=1 00:16:13.318 00:16:13.318 ' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.318 06:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.318 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:19.885 
06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.885 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:19.886 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.886 
06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:19.886 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:19.886 Found net devices under 0000:86:00.0: cvl_0_0 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:19.886 Found net devices under 0000:86:00.1: cvl_0_1 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:19.886 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:19.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:16:19.886 00:16:19.886 --- 10.0.0.2 ping statistics --- 00:16:19.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.886 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:16:19.886 00:16:19.886 --- 10.0.0.1 ping statistics --- 00:16:19.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.886 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=471604 00:16:19.886 06:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 471604 00:16:19.886 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 471604 ']' 00:16:19.887 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.887 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:19.887 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.887 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:19.887 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:19.887 [2024-11-20 06:26:51.191280] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:19.887 [2024-11-20 06:26:51.191322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.887 [2024-11-20 06:26:51.272308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.887 [2024-11-20 06:26:51.314922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.887 [2024-11-20 06:26:51.314959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.887 [2024-11-20 06:26:51.314966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.887 [2024-11-20 06:26:51.314972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.887 [2024-11-20 06:26:51.314977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
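The setup traced above turns the two E810 ports found under 0000:86:00.0/.1 (cvl_0_0 and cvl_0_1) into a target/initiator pair by moving the target-side port into its own network namespace, so NVMe/TCP traffic genuinely crosses the link instead of being short-circuited through loopback. A condensed sketch of the sequence — commands as they appear in the trace, with the full nvmf_tgt path shortened:

  # isolate the target-side port in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # address both ends and bring the links up
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port on the initiator side, then check reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # finally, run the target inside the namespace (nvmfpid=471604 in this run)
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF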
00:16:19.887 [2024-11-20 06:26:51.316546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.887 [2024-11-20 06:26:51.316586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.887 [2024-11-20 06:26:51.316691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.887 [2024-11-20 06:26:51.316692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.454 [2024-11-20 06:26:52.070405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.454 06:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.454 [2024-11-20 06:26:52.138684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:16:20.454 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:23.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.918 rmmod nvme_tcp 00:16:36.918 rmmod nvme_fabrics 00:16:36.918 rmmod nvme_keyring 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 471604 ']' 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 471604 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 471604 ']' 00:16:36.918 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 471604 00:16:36.919 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
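Before the five connect/disconnect iterations above, connect_disconnect.sh provisions a minimal target through the RPCs traced here: one TCP transport, one 64 MiB malloc bdev, one subsystem carrying that bdev as a namespace, and one listener on 10.0.0.2:4420. A sketch of the provisioning plus a single iteration of the loop — the nvme connect invocation itself is not echoed in this excerpt, so its flags are inferred from the nvme-cli conventions used elsewhere in the run:

  # target side: transport -> bdev -> subsystem -> namespace -> listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512        # 64 MiB of RAM, 512 B blocks -> "Malloc0"
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: one of the five iterations; disconnect prints the
  # "NQN:... disconnected 1 controller(s)" lines seen above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1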
00:16:36.919 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:36.919 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 471604 00:16:36.919 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:36.919 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:36.919 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 471604' 00:16:36.919 killing process with pid 471604 00:16:36.919 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 471604 00:16:36.919 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 471604 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.178 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.087 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:39.087 00:16:39.087 real 0m25.914s 00:16:39.087 user 1m11.152s 00:16:39.087 sys 0m5.854s 00:16:39.087 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:39.087 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:39.087 ************************************ 00:16:39.087 END TEST nvmf_connect_disconnect 00:16:39.087 ************************************ 00:16:39.087 06:27:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:39.087 06:27:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:39.087 06:27:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:39.087 06:27:10 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:16:39.087 ************************************ 00:16:39.087 START TEST nvmf_multitarget 00:16:39.087 ************************************ 00:16:39.087 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:39.347 * Looking for test storage... 00:16:39.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:39.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.347 --rc genhtml_branch_coverage=1 00:16:39.347 --rc genhtml_function_coverage=1 00:16:39.347 --rc genhtml_legend=1 00:16:39.347 --rc geninfo_all_blocks=1 00:16:39.347 --rc geninfo_unexecuted_blocks=1 00:16:39.347 00:16:39.347 ' 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:39.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.347 --rc genhtml_branch_coverage=1 00:16:39.347 --rc genhtml_function_coverage=1 00:16:39.347 --rc genhtml_legend=1 00:16:39.347 --rc geninfo_all_blocks=1 00:16:39.347 --rc geninfo_unexecuted_blocks=1 00:16:39.347 00:16:39.347 ' 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:39.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.347 --rc genhtml_branch_coverage=1 00:16:39.347 --rc genhtml_function_coverage=1 00:16:39.347 --rc genhtml_legend=1 00:16:39.347 --rc geninfo_all_blocks=1 00:16:39.347 --rc geninfo_unexecuted_blocks=1 00:16:39.347 00:16:39.347 ' 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:39.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.347 --rc genhtml_branch_coverage=1 00:16:39.347 --rc genhtml_function_coverage=1 00:16:39.347 --rc genhtml_legend=1 00:16:39.347 --rc geninfo_all_blocks=1 00:16:39.347 --rc geninfo_unexecuted_blocks=1 00:16:39.347 00:16:39.347 ' 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.347 06:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.347 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:39.348 06:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:39.348 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:45.923 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:45.923 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:45.923 Found net devices under 0000:86:00.0: cvl_0_0 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:45.923 Found net devices under 0000:86:00.1: cvl_0_1 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.923 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.924 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:45.924 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:45.924 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.924 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.924 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.924 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.924 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:45.924 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:45.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:16:45.924 00:16:45.924 --- 10.0.0.2 ping statistics --- 00:16:45.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.924 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:16:45.924 00:16:45.924 --- 10.0.0.1 ping statistics --- 00:16:45.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.924 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=478525 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 478525 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 478525 ']' 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:45.924 [2024-11-20 06:27:17.144030] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:45.924 [2024-11-20 06:27:17.144072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.924 [2024-11-20 06:27:17.223387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.924 [2024-11-20 06:27:17.265429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.924 [2024-11-20 06:27:17.265466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.924 [2024-11-20 06:27:17.265473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.924 [2024-11-20 06:27:17.265479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.924 [2024-11-20 06:27:17.265484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.924 [2024-11-20 06:27:17.267042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.924 [2024-11-20 06:27:17.267154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.924 [2024-11-20 06:27:17.267290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.924 [2024-11-20 06:27:17.267291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:45.924 "nvmf_tgt_1" 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:45.924 "nvmf_tgt_2" 00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:45.924 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:46.183 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:46.183 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:46.183 true 00:16:46.183 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:46.442 true 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:46.442 rmmod nvme_tcp 00:16:46.442 rmmod nvme_fabrics 00:16:46.442 rmmod nvme_keyring 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 478525 ']' 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 478525 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 478525 ']' 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 478525 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:46.442 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 478525 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:46.702 06:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 478525' 00:16:46.702 killing process with pid 478525 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 478525 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 478525 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.702 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:49.239 00:16:49.239 real 0m9.606s 00:16:49.239 user 0m7.258s 00:16:49.239 sys 0m4.899s 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:49.239 ************************************ 00:16:49.239 END TEST nvmf_multitarget 00:16:49.239 ************************************ 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:49.239 ************************************ 00:16:49.239 START TEST nvmf_rpc 00:16:49.239 ************************************ 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:49.239 * Looking for test storage... 
00:16:49.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.239 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.240 --rc genhtml_branch_coverage=1 00:16:49.240 --rc genhtml_function_coverage=1 00:16:49.240 --rc genhtml_legend=1 00:16:49.240 --rc geninfo_all_blocks=1 00:16:49.240 --rc geninfo_unexecuted_blocks=1 00:16:49.240 00:16:49.240 ' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.240 --rc genhtml_branch_coverage=1 00:16:49.240 --rc genhtml_function_coverage=1 00:16:49.240 --rc genhtml_legend=1 00:16:49.240 --rc geninfo_all_blocks=1 00:16:49.240 --rc geninfo_unexecuted_blocks=1 00:16:49.240 00:16:49.240 ' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.240 --rc genhtml_branch_coverage=1 00:16:49.240 --rc genhtml_function_coverage=1 00:16:49.240 --rc genhtml_legend=1 00:16:49.240 --rc geninfo_all_blocks=1 00:16:49.240 --rc geninfo_unexecuted_blocks=1 00:16:49.240 00:16:49.240 ' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.240 --rc genhtml_branch_coverage=1 00:16:49.240 --rc genhtml_function_coverage=1 00:16:49.240 --rc genhtml_legend=1 00:16:49.240 --rc geninfo_all_blocks=1 00:16:49.240 --rc geninfo_unexecuted_blocks=1 00:16:49.240 00:16:49.240 ' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:49.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:49.240 06:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.240 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:49.241 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:49.241 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:49.241 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:55.815 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:55.815 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:55.815 Found net devices under 0000:86:00.0: cvl_0_0 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:55.815 Found net devices under 0000:86:00.1: cvl_0_1 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:55.815 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:55.816 06:27:26 
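The trace above enumerates the supported NVMe-oF NICs by PCI vendor:device ID (Intel 0x8086 for the e810/x722 families, Mellanox 0x15b3 for the mlx list) and then maps each matched PCI function to its kernel netdev through sysfs, which is how the two cvl_0_* interfaces are found under 0000:86:00.0 and 0000:86:00.1. A minimal sketch of that mapping, assuming the same sysfs layout; the PCI addresses and interface names are the ones reported in the log:

    for pci in 0000:86:00.0 0000:86:00.1; do
        # every net device bound to this PCI function appears under its sysfs node,
        # mirroring the pci_net_devs assignment in nvmf/common.sh above
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
        done
    done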
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:55.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:16:55.816 00:16:55.816 --- 10.0.0.2 ping statistics --- 00:16:55.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.816 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:16:55.816 00:16:55.816 --- 10.0.0.1 ping statistics --- 00:16:55.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.816 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=482312 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 482312 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 482312 ']' 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:55.816 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.816 [2024-11-20 06:27:26.846170] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
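Everything from ip netns add through the two pings above builds the test topology: the first e810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace and acts as the target, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, with an iptables ACCEPT rule opening TCP port 4420; nvmf_tgt is then launched inside the namespace with the 0xF core mask. A condensed replay of those commands, taken directly from the trace (run as root from the SPDK checkout; the harness additionally backgrounds the target and waits for its RPC socket):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target, as in the log
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &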
00:16:55.816 [2024-11-20 06:27:26.846226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.816 [2024-11-20 06:27:26.925703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.816 [2024-11-20 06:27:26.967660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.816 [2024-11-20 06:27:26.967696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.816 [2024-11-20 06:27:26.967702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.816 [2024-11-20 06:27:26.967708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.816 [2024-11-20 06:27:26.967713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.816 [2024-11-20 06:27:26.969089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.816 [2024-11-20 06:27:26.969218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.816 [2024-11-20 06:27:26.969309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.816 [2024-11-20 06:27:26.969310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:55.816 "tick_rate": 2100000000, 00:16:55.816 "poll_groups": [ 00:16:55.816 { 00:16:55.816 "name": "nvmf_tgt_poll_group_000", 00:16:55.816 "admin_qpairs": 0, 00:16:55.816 "io_qpairs": 0, 00:16:55.816 "current_admin_qpairs": 0, 00:16:55.816 "current_io_qpairs": 0, 00:16:55.816 "pending_bdev_io": 0, 00:16:55.816 "completed_nvme_io": 0, 00:16:55.816 "transports": [] 00:16:55.816 }, 00:16:55.816 { 00:16:55.816 "name": "nvmf_tgt_poll_group_001", 00:16:55.816 "admin_qpairs": 0, 00:16:55.816 "io_qpairs": 0, 00:16:55.816 "current_admin_qpairs": 0, 00:16:55.816 "current_io_qpairs": 0, 00:16:55.816 "pending_bdev_io": 0, 00:16:55.816 "completed_nvme_io": 0, 00:16:55.816 "transports": [] 00:16:55.816 }, 00:16:55.816 { 00:16:55.816 "name": "nvmf_tgt_poll_group_002", 00:16:55.816 "admin_qpairs": 0, 00:16:55.816 "io_qpairs": 0, 00:16:55.816 
"current_admin_qpairs": 0, 00:16:55.816 "current_io_qpairs": 0, 00:16:55.816 "pending_bdev_io": 0, 00:16:55.816 "completed_nvme_io": 0, 00:16:55.816 "transports": [] 00:16:55.816 }, 00:16:55.816 { 00:16:55.816 "name": "nvmf_tgt_poll_group_003", 00:16:55.816 "admin_qpairs": 0, 00:16:55.816 "io_qpairs": 0, 00:16:55.816 "current_admin_qpairs": 0, 00:16:55.816 "current_io_qpairs": 0, 00:16:55.816 "pending_bdev_io": 0, 00:16:55.816 "completed_nvme_io": 0, 00:16:55.816 "transports": [] 00:16:55.816 } 00:16:55.816 ] 00:16:55.816 }' 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.816 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.816 [2024-11-20 06:27:27.214454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:55.817 "tick_rate": 2100000000, 00:16:55.817 "poll_groups": [ 00:16:55.817 { 00:16:55.817 "name": "nvmf_tgt_poll_group_000", 00:16:55.817 "admin_qpairs": 0, 00:16:55.817 "io_qpairs": 0, 00:16:55.817 "current_admin_qpairs": 0, 00:16:55.817 "current_io_qpairs": 0, 00:16:55.817 "pending_bdev_io": 0, 00:16:55.817 "completed_nvme_io": 0, 00:16:55.817 "transports": [ 00:16:55.817 { 00:16:55.817 "trtype": "TCP" 00:16:55.817 } 00:16:55.817 ] 00:16:55.817 }, 00:16:55.817 { 00:16:55.817 "name": "nvmf_tgt_poll_group_001", 00:16:55.817 "admin_qpairs": 0, 00:16:55.817 "io_qpairs": 0, 00:16:55.817 "current_admin_qpairs": 0, 00:16:55.817 "current_io_qpairs": 0, 00:16:55.817 "pending_bdev_io": 0, 00:16:55.817 "completed_nvme_io": 0, 00:16:55.817 "transports": [ 00:16:55.817 { 00:16:55.817 "trtype": "TCP" 00:16:55.817 } 00:16:55.817 ] 00:16:55.817 }, 00:16:55.817 { 00:16:55.817 "name": "nvmf_tgt_poll_group_002", 00:16:55.817 "admin_qpairs": 0, 00:16:55.817 "io_qpairs": 0, 00:16:55.817 "current_admin_qpairs": 0, 00:16:55.817 "current_io_qpairs": 0, 00:16:55.817 "pending_bdev_io": 0, 00:16:55.817 "completed_nvme_io": 0, 00:16:55.817 "transports": [ 00:16:55.817 { 00:16:55.817 "trtype": "TCP" 
00:16:55.817 } 00:16:55.817 ] 00:16:55.817 }, 00:16:55.817 { 00:16:55.817 "name": "nvmf_tgt_poll_group_003", 00:16:55.817 "admin_qpairs": 0, 00:16:55.817 "io_qpairs": 0, 00:16:55.817 "current_admin_qpairs": 0, 00:16:55.817 "current_io_qpairs": 0, 00:16:55.817 "pending_bdev_io": 0, 00:16:55.817 "completed_nvme_io": 0, 00:16:55.817 "transports": [ 00:16:55.817 { 00:16:55.817 "trtype": "TCP" 00:16:55.817 } 00:16:55.817 ] 00:16:55.817 } 00:16:55.817 ] 00:16:55.817 }' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.817 Malloc1 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.817 [2024-11-20 06:27:27.390550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:55.817 [2024-11-20 06:27:27.419144] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:16:55.817 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:55.817 could not add new controller: failed to write to nvme-fabrics device 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:55.817 06:27:27 
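The NOT wrapper around the first nvme connect inverts the expectation: with allow_any_host disabled (-d) and no host entry on the subsystem, the connect must be rejected, and the harness turns that 'does not allow host' failure into a test pass (es=1). A minimal stand-in for that pattern, not the harness's exact implementation (the real helper also validates the command and tolerates a range of exit codes); the host NQN is the one from the trace:

    # succeed only if the wrapped command fails, mimicking the NOT helper above
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562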
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.817 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:56.753 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:56.753 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:56.753 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:56.753 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:56.753 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
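Adding the host NQN to the subsystem's allowed-host list is what flips the earlier rejection into a successful fabric login; the identical connect command then yields a block device whose serial (SPDKISFASTANDAWESOME) waitforserial polls for before the controller is disconnected again. Sketched with scripts/rpc.py and nvme-cli, using the NQNs and address from the trace:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # 1 once the namespace is up
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1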
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:59.284 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:59.285 [2024-11-20 06:27:30.714621] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:16:59.285 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:59.285 could not add new controller: failed to write to nvme-fabrics device 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.285 
06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.285 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.341 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:00.341 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:00.341 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.341 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:00.341 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.268 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:02.268 
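The middle of this block exercises the other access-control path: after nvmf_subsystem_remove_host the connect is rejected with the same 'does not allow host' error, but enabling allow_any_host (-e) admits any initiator NQN without a per-host entry. The toggle as driven in the trace, reusing HOSTNQN from the previous sketch:

    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"  # back to deny
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1       # open to all hosts
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"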
06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.268 [2024-11-20 06:27:34.028577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.268 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.645 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:03.645 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:03.645 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:03.645 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:03.645 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.551 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.809 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.809 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.809 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.809 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.809 [2024-11-20 06:27:37.394659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.809 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
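target/rpc.sh@81 opens a five-iteration loop that rebuilds the whole subsystem from scratch each pass: create, listen, attach Malloc1 as namespace 5, open to any host, connect, verify the serial, disconnect, detach the namespace, delete. One iteration, condensed from the pattern repeating through these lines (HOSTNQN as above; the harness interleaves waitforserial/waitforserial_disconnect checks around the connect and disconnect):

    for i in $(seq 1 5); do
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed nsid 5
        ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
        sleep 2                                           # the harness waits before polling lsblk
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done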
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.810 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.746 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.746 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:06.746 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.746 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:06.746 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:09.292 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 [2024-11-20 06:27:40.678024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.293 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.229 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:10.229 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:10.229 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.229 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:10.229 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:12.132 
06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:12.132 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:12.132 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:12.132 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:12.132 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.132 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:12.133 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.133 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.133 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:12.133 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:12.133 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.391 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:12.391 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.391 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:12.391 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:12.391 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.391 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.391 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.391 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.391 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.391 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.391 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.391 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.392 [2024-11-20 06:27:44.025240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.392 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.328 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.328 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:13.328 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.328 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:13.328 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.862 [2024-11-20 06:27:47.330107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.862 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.798 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:16.798 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:16.798 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:16.798 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:16.798 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:18.700 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:18.700 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:18.700 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.700 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:18.700 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.700 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:18.700 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:18.960 
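The seq 1 5 above hands off from rpc.sh's connect/disconnect loop (two iterations of it are traced above) to a second loop that cycles the same subsystem RPCs five times without any host I/O. A hedged sketch of the first loop's body, using only the calls visible in the trace; the loop framing is approximate, and NVME_HOST is the --hostnqn/--hostid pair common.sh defines:

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # fixed nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME               # device appeared on the host
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME    # and went away again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The second loop (rpc.sh@99-@107, starting next) drops the connect/wait/disconnect steps and removes nsid 1 instead, since add_ns is called there without -n.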
06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 [2024-11-20 06:27:50.608815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 [2024-11-20 06:27:50.656906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.960 
06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 [2024-11-20 06:27:50.705048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.960 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 [2024-11-20 06:27:50.753241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.961 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 [2024-11-20 06:27:50.801420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:19.225 "tick_rate": 2100000000, 00:17:19.225 "poll_groups": [ 00:17:19.225 { 00:17:19.225 "name": "nvmf_tgt_poll_group_000", 00:17:19.225 "admin_qpairs": 2, 00:17:19.225 "io_qpairs": 168, 00:17:19.225 "current_admin_qpairs": 0, 00:17:19.225 "current_io_qpairs": 0, 00:17:19.225 "pending_bdev_io": 0, 00:17:19.225 "completed_nvme_io": 269, 00:17:19.225 "transports": [ 00:17:19.225 { 00:17:19.225 "trtype": "TCP" 00:17:19.225 } 00:17:19.225 ] 00:17:19.225 }, 00:17:19.225 { 00:17:19.225 "name": "nvmf_tgt_poll_group_001", 00:17:19.225 "admin_qpairs": 2, 00:17:19.225 "io_qpairs": 168, 00:17:19.225 "current_admin_qpairs": 0, 00:17:19.225 "current_io_qpairs": 0, 00:17:19.225 "pending_bdev_io": 0, 00:17:19.225 "completed_nvme_io": 222, 00:17:19.225 "transports": [ 00:17:19.225 { 00:17:19.225 "trtype": "TCP" 00:17:19.225 } 00:17:19.225 ] 00:17:19.225 }, 00:17:19.225 { 00:17:19.225 "name": "nvmf_tgt_poll_group_002", 00:17:19.225 "admin_qpairs": 1, 00:17:19.225 "io_qpairs": 168, 00:17:19.225 "current_admin_qpairs": 0, 00:17:19.225 "current_io_qpairs": 0, 00:17:19.225 "pending_bdev_io": 0, 00:17:19.225 "completed_nvme_io": 198, 00:17:19.225 "transports": [ 00:17:19.225 { 00:17:19.225 "trtype": "TCP" 00:17:19.225 } 00:17:19.225 ] 00:17:19.225 }, 00:17:19.225 { 00:17:19.225 "name": "nvmf_tgt_poll_group_003", 00:17:19.225 "admin_qpairs": 2, 00:17:19.225 "io_qpairs": 168, 00:17:19.225 "current_admin_qpairs": 0, 00:17:19.225 "current_io_qpairs": 0, 00:17:19.225 "pending_bdev_io": 0, 00:17:19.225 "completed_nvme_io": 333, 00:17:19.225 "transports": [ 00:17:19.225 { 00:17:19.225 "trtype": "TCP" 00:17:19.225 } 00:17:19.225 ] 00:17:19.225 } 00:17:19.225 ] 00:17:19.225 }' 00:17:19.225 06:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.225 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.225 rmmod nvme_tcp 00:17:19.225 rmmod nvme_fabrics 00:17:19.225 rmmod nvme_keyring 00:17:19.225 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.225 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:19.226 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:19.226 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 482312 ']' 00:17:19.226 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 482312 00:17:19.226 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 482312 ']' 00:17:19.226 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 482312 00:17:19.226 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:17:19.226 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:19.226 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 482312 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 482312' 
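The @112/@113 assertions just above reduce the nvmf_get_stats JSON with rpc.sh's jsum helper; its @19/@20 trace is a jq-plus-awk pipeline. A sketch consistent with the trace (feeding $stats in through a here-string is inferred, the log only shows the pipeline):

    jsum() {
        local filter=$1
        # jq emits one number per poll group; awk sums the column
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

Against the JSON above, '.poll_groups[].admin_qpairs' sums to 7 (2+2+1+2) and '.poll_groups[].io_qpairs' to 672 (4 x 168), matching the (( 7 > 0 )) and (( 672 > 0 )) checks in the trace.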
00:17:19.486 killing process with pid 482312 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 482312 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 482312 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.486 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:22.023 00:17:22.023 real 0m32.737s 00:17:22.023 user 1m38.489s 00:17:22.023 sys 0m6.499s 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.023 ************************************ 00:17:22.023 END TEST nvmf_rpc 00:17:22.023 ************************************ 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.023 ************************************ 00:17:22.023 START TEST nvmf_invalid 00:17:22.023 ************************************ 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:22.023 * Looking for test storage... 
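Before nvmf_invalid gets under way, the teardown that just closed nvmf_rpc is worth a condensed view: nvmftestfini unloads the kernel initiator modules, kills the target, and unwinds the namespace plumbing. A sketch of that sequence; the helper names are as traced, but the netns removal body is an assumption about what _remove_spdk_ns does:

    sync
    modprobe -v -r nvme-tcp      # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"       # kill the nvmf_tgt reactor, then wait on the pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1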
00:17:22.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:22.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.023 --rc genhtml_branch_coverage=1 00:17:22.023 --rc genhtml_function_coverage=1 00:17:22.023 --rc genhtml_legend=1 00:17:22.023 --rc geninfo_all_blocks=1 00:17:22.023 --rc geninfo_unexecuted_blocks=1 00:17:22.023 00:17:22.023 ' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:22.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.023 --rc genhtml_branch_coverage=1 00:17:22.023 --rc genhtml_function_coverage=1 00:17:22.023 --rc genhtml_legend=1 00:17:22.023 --rc geninfo_all_blocks=1 00:17:22.023 --rc geninfo_unexecuted_blocks=1 00:17:22.023 00:17:22.023 ' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:22.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.023 --rc genhtml_branch_coverage=1 00:17:22.023 --rc genhtml_function_coverage=1 00:17:22.023 --rc genhtml_legend=1 00:17:22.023 --rc geninfo_all_blocks=1 00:17:22.023 --rc geninfo_unexecuted_blocks=1 00:17:22.023 00:17:22.023 ' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:22.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.023 --rc genhtml_branch_coverage=1 00:17:22.023 --rc genhtml_function_coverage=1 00:17:22.023 --rc genhtml_legend=1 00:17:22.023 --rc geninfo_all_blocks=1 00:17:22.023 --rc geninfo_unexecuted_blocks=1 00:17:22.023 00:17:22.023 ' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:22.023 06:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:22.023 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:22.024 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.588 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:28.589 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:28.589 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:28.589 Found net devices under 0000:86:00.0: cvl_0_0 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:28.589 Found net devices under 0000:86:00.1: cvl_0_1 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.589 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:28.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:17:28.590 00:17:28.590 --- 10.0.0.2 ping statistics --- 00:17:28.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.590 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
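The nvmf_tcp_init block above sets up the test topology: the first E810 port (cvl_0_0) becomes the target inside a private network namespace at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and connectivity is ping-verified both ways (the second ping's reply follows below). Condensed as a sketch, with interface names and addresses exactly as traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the comment tag is what lets nvmftestfini strip this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # root namespace to target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # and back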
00:17:28.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:17:28.590 00:17:28.590 --- 10.0.0.1 ping statistics --- 00:17:28.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.590 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=489927 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 489927 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 489927 ']' 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:28.590 [2024-11-20 06:27:59.652152] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
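nvmfappstart then launches the target inside that namespace (pid 489927 here, nvmfpid in the script) and blocks until the RPC socket answers. As a sketch; backgrounding with & and capturing $! is the assumed framing, since the log only shows the pid and the command line:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &      # -m 0xF: reactors on four cores
    nvmfpid=$!
    waitforlisten "$nvmfpid"         # polls /var/tmp/spdk.sock, the trace's rpc_addr default

The startup banner that follows confirms the mask: four reactors, cores 0 through 3.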
00:17:28.590 [2024-11-20 06:27:59.652195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.590 [2024-11-20 06:27:59.730358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.590 [2024-11-20 06:27:59.772350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.590 [2024-11-20 06:27:59.772388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.590 [2024-11-20 06:27:59.772398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.590 [2024-11-20 06:27:59.772404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.590 [2024-11-20 06:27:59.772409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.590 [2024-11-20 06:27:59.773844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.590 [2024-11-20 06:27:59.773955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.590 [2024-11-20 06:27:59.774061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.590 [2024-11-20 06:27:59.774062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:28.590 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21988 00:17:28.590 [2024-11-20 06:28:00.083598] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:28.590 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:28.590 { 00:17:28.590 "nqn": "nqn.2016-06.io.spdk:cnode21988", 00:17:28.590 "tgt_name": "foobar", 00:17:28.590 "method": "nvmf_create_subsystem", 00:17:28.590 "req_id": 1 00:17:28.590 } 00:17:28.590 Got JSON-RPC error response 00:17:28.590 response: 00:17:28.590 { 00:17:28.590 "code": -32603, 00:17:28.590 "message": "Unable to find target foobar" 00:17:28.590 }' 00:17:28.590 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:28.590 { 00:17:28.590 "nqn": "nqn.2016-06.io.spdk:cnode21988", 00:17:28.590 "tgt_name": "foobar", 00:17:28.590 "method": "nvmf_create_subsystem", 00:17:28.590 "req_id": 1 00:17:28.590 } 00:17:28.590 Got JSON-RPC error response 00:17:28.590 
response: 00:17:28.590 { 00:17:28.590 "code": -32603, 00:17:28.590 "message": "Unable to find target foobar" 00:17:28.590 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:28.590 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:28.590 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22342 00:17:28.590 [2024-11-20 06:28:00.292349] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22342: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:28.590 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:28.590 { 00:17:28.590 "nqn": "nqn.2016-06.io.spdk:cnode22342", 00:17:28.590 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:28.590 "method": "nvmf_create_subsystem", 00:17:28.590 "req_id": 1 00:17:28.590 } 00:17:28.590 Got JSON-RPC error response 00:17:28.590 response: 00:17:28.590 { 00:17:28.590 "code": -32602, 00:17:28.590 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:28.590 }' 00:17:28.590 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:28.590 { 00:17:28.590 "nqn": "nqn.2016-06.io.spdk:cnode22342", 00:17:28.590 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:28.590 "method": "nvmf_create_subsystem", 00:17:28.590 "req_id": 1 00:17:28.590 } 00:17:28.590 Got JSON-RPC error response 00:17:28.590 response: 00:17:28.590 { 00:17:28.591 "code": -32602, 00:17:28.591 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:28.591 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:28.591 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:28.591 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30705 00:17:28.849 [2024-11-20 06:28:00.513071] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30705: invalid model number 'SPDK_Controller' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:28.849 { 00:17:28.849 "nqn": "nqn.2016-06.io.spdk:cnode30705", 00:17:28.849 "model_number": "SPDK_Controller\u001f", 00:17:28.849 "method": "nvmf_create_subsystem", 00:17:28.849 "req_id": 1 00:17:28.849 } 00:17:28.849 Got JSON-RPC error response 00:17:28.849 response: 00:17:28.849 { 00:17:28.849 "code": -32602, 00:17:28.849 "message": "Invalid MN SPDK_Controller\u001f" 00:17:28.849 }' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:28.849 { 00:17:28.849 "nqn": "nqn.2016-06.io.spdk:cnode30705", 00:17:28.849 "model_number": "SPDK_Controller\u001f", 00:17:28.849 "method": "nvmf_create_subsystem", 00:17:28.849 "req_id": 1 00:17:28.849 } 00:17:28.849 Got JSON-RPC error response 00:17:28.849 response: 00:17:28.849 { 00:17:28.849 "code": -32602, 00:17:28.849 "message": "Invalid MN SPDK_Controller\u001f" 00:17:28.849 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:28.849 06:28:00 
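From here the run is a series of negative RPC tests, all with the same shape: issue an rpc.py call that must fail, capture the JSON-RPC error response, and glob-match the message. A sketch of that pattern (not the harness's literal helper), using the control-character serial-number case just above:

  # $'...\037' is bash ANSI-C quoting: it embeds the raw 0x1f byte
  # that makes the serial number invalid.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
        nqn.2016-06.io.spdk:cnode22342 2>&1) || true
  [[ $out == *"Invalid SN"* ]] || exit 1

The long printf %x / string+= stretch that starts here is gen_random_s building randomized serial and model numbers one character at a time from the chars array (ASCII 0x20 through 0x7f). Condensed to its effect (sketch; the real helper also checks whether the first character is '-', presumably so the result is not mistaken for an option):

  gen_random_s() {
      local length=$1 ll string=
      for (( ll = 0; ll < length; ll++ )); do
          # pick one of the 96 codepoints 32..127, append as a character
          string+=$(echo -e "\\x$(printf '%x' $((RANDOM % 96 + 32)))")
      done
      echo "$string"
  }

The same expect-an-error pattern covers the cntlid cases further down: min_cntlid and max_cntlid must each lie in 1-65519 with min <= max, which is why [0-65519], [65520-65519], [1-0], [1-65520] and [6-5] are all rejected.
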
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.849 06:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.849 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:28.850 06:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 
00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:28.850 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ o == \- ]] 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'o(R KXGd%b^Hw:'\''{)~J' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'o(R KXGd%b^Hw:'\''{)~J' nqn.2016-06.io.spdk:cnode9927 00:17:29.109 [2024-11-20 06:28:00.858283] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9927: invalid serial number 'o(R KXGd%b^Hw:'{)~J' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:29.109 { 00:17:29.109 "nqn": "nqn.2016-06.io.spdk:cnode9927", 00:17:29.109 "serial_number": "o(R \u007fKXGd%b^\u007fHw:'\''{)~J", 00:17:29.109 "method": "nvmf_create_subsystem", 00:17:29.109 "req_id": 1 00:17:29.109 } 00:17:29.109 Got JSON-RPC error response 00:17:29.109 response: 00:17:29.109 { 00:17:29.109 "code": -32602, 00:17:29.109 "message": "Invalid SN o(R \u007fKXGd%b^\u007fHw:'\''{)~J" 00:17:29.109 }' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:29.109 { 00:17:29.109 "nqn": "nqn.2016-06.io.spdk:cnode9927", 00:17:29.109 "serial_number": "o(R \u007fKXGd%b^\u007fHw:'{)~J", 00:17:29.109 "method": "nvmf_create_subsystem", 00:17:29.109 "req_id": 1 00:17:29.109 } 00:17:29.109 Got JSON-RPC error response 00:17:29.109 response: 00:17:29.109 { 00:17:29.109 "code": -32602, 00:17:29.109 "message": "Invalid SN o(R \u007fKXGd%b^\u007fHw:'{)~J" 00:17:29.109 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' 
'65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x46' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.109 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 119 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:29.368 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x42' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 54 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '".F(hFr5I5~xw]J}*mFG|ANz.0Se^ivB~chV:l6Pk' 00:17:29.369 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '".F(hFr5I5~xw]J}*mFG|ANz.0Se^ivB~chV:l6Pk' nqn.2016-06.io.spdk:cnode10111 00:17:29.627 [2024-11-20 06:28:01.315812] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10111: invalid model number '".F(hFr5I5~xw]J}*mFG|ANz.0Se^ivB~chV:l6Pk' 00:17:29.627 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:29.627 { 00:17:29.627 "nqn": "nqn.2016-06.io.spdk:cnode10111", 00:17:29.627 "model_number": "\".F(hFr5I5~xw]J}*mFG|ANz.0Se^ivB~chV:l6Pk", 00:17:29.627 "method": "nvmf_create_subsystem", 00:17:29.627 "req_id": 1 00:17:29.627 } 00:17:29.627 Got JSON-RPC error response 00:17:29.627 response: 00:17:29.627 { 00:17:29.627 "code": -32602, 00:17:29.627 "message": "Invalid MN \".F(hFr5I5~xw]J}*mFG|ANz.0Se^ivB~chV:l6Pk" 00:17:29.627 }' 00:17:29.627 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:29.627 { 00:17:29.627 "nqn": "nqn.2016-06.io.spdk:cnode10111", 00:17:29.627 "model_number": "\".F(hFr5I5~xw]J}*mFG|ANz.0Se^ivB~chV:l6Pk", 00:17:29.627 "method": "nvmf_create_subsystem", 00:17:29.627 "req_id": 1 00:17:29.627 } 00:17:29.627 Got JSON-RPC error response 00:17:29.627 response: 00:17:29.627 { 00:17:29.627 "code": -32602, 00:17:29.627 "message": "Invalid MN \".F(hFr5I5~xw]J}*mFG|ANz.0Se^ivB~chV:l6Pk" 00:17:29.627 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:29.627 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:29.885 [2024-11-20 06:28:01.512548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** 
TCP Transport Init *** 00:17:29.885 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:30.142 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:30.142 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:30.142 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:30.142 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:30.142 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:30.142 [2024-11-20 06:28:01.941940] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:30.142 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:30.142 { 00:17:30.142 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:30.142 "listen_address": { 00:17:30.142 "trtype": "tcp", 00:17:30.142 "traddr": "", 00:17:30.142 "trsvcid": "4421" 00:17:30.142 }, 00:17:30.142 "method": "nvmf_subsystem_remove_listener", 00:17:30.142 "req_id": 1 00:17:30.142 } 00:17:30.142 Got JSON-RPC error response 00:17:30.142 response: 00:17:30.142 { 00:17:30.142 "code": -32602, 00:17:30.142 "message": "Invalid parameters" 00:17:30.142 }' 00:17:30.142 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:30.142 { 00:17:30.142 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:30.142 "listen_address": { 00:17:30.142 "trtype": "tcp", 00:17:30.142 "traddr": "", 00:17:30.142 "trsvcid": "4421" 00:17:30.142 }, 00:17:30.142 "method": "nvmf_subsystem_remove_listener", 00:17:30.142 "req_id": 1 00:17:30.142 } 00:17:30.142 Got JSON-RPC error response 00:17:30.142 response: 00:17:30.142 { 00:17:30.142 "code": -32602, 00:17:30.142 "message": "Invalid parameters" 00:17:30.142 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:30.399 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27099 -i 0 00:17:30.400 [2024-11-20 06:28:02.146602] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27099: invalid cntlid range [0-65519] 00:17:30.400 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:30.400 { 00:17:30.400 "nqn": "nqn.2016-06.io.spdk:cnode27099", 00:17:30.400 "min_cntlid": 0, 00:17:30.400 "method": "nvmf_create_subsystem", 00:17:30.400 "req_id": 1 00:17:30.400 } 00:17:30.400 Got JSON-RPC error response 00:17:30.400 response: 00:17:30.400 { 00:17:30.400 "code": -32602, 00:17:30.400 "message": "Invalid cntlid range [0-65519]" 00:17:30.400 }' 00:17:30.400 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:30.400 { 00:17:30.400 "nqn": "nqn.2016-06.io.spdk:cnode27099", 00:17:30.400 "min_cntlid": 0, 00:17:30.400 "method": "nvmf_create_subsystem", 00:17:30.400 "req_id": 1 00:17:30.400 } 00:17:30.400 Got JSON-RPC error response 00:17:30.400 response: 00:17:30.400 { 00:17:30.400 "code": -32602, 00:17:30.400 "message": "Invalid cntlid range [0-65519]" 00:17:30.400 } == *\I\n\v\a\l\i\d\ 
\c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.400 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28801 -i 65520 00:17:30.657 [2024-11-20 06:28:02.355316] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28801: invalid cntlid range [65520-65519] 00:17:30.657 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:30.657 { 00:17:30.657 "nqn": "nqn.2016-06.io.spdk:cnode28801", 00:17:30.657 "min_cntlid": 65520, 00:17:30.657 "method": "nvmf_create_subsystem", 00:17:30.657 "req_id": 1 00:17:30.657 } 00:17:30.657 Got JSON-RPC error response 00:17:30.657 response: 00:17:30.657 { 00:17:30.657 "code": -32602, 00:17:30.657 "message": "Invalid cntlid range [65520-65519]" 00:17:30.657 }' 00:17:30.657 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:30.657 { 00:17:30.657 "nqn": "nqn.2016-06.io.spdk:cnode28801", 00:17:30.657 "min_cntlid": 65520, 00:17:30.657 "method": "nvmf_create_subsystem", 00:17:30.657 "req_id": 1 00:17:30.657 } 00:17:30.657 Got JSON-RPC error response 00:17:30.657 response: 00:17:30.657 { 00:17:30.657 "code": -32602, 00:17:30.657 "message": "Invalid cntlid range [65520-65519]" 00:17:30.657 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.657 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23772 -I 0 00:17:30.915 [2024-11-20 06:28:02.555982] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23772: invalid cntlid range [1-0] 00:17:30.915 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:30.915 { 00:17:30.915 "nqn": "nqn.2016-06.io.spdk:cnode23772", 00:17:30.915 "max_cntlid": 0, 00:17:30.915 "method": "nvmf_create_subsystem", 00:17:30.915 "req_id": 1 00:17:30.915 } 00:17:30.915 Got JSON-RPC error response 00:17:30.915 response: 00:17:30.915 { 00:17:30.915 "code": -32602, 00:17:30.915 "message": "Invalid cntlid range [1-0]" 00:17:30.915 }' 00:17:30.915 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:30.915 { 00:17:30.915 "nqn": "nqn.2016-06.io.spdk:cnode23772", 00:17:30.915 "max_cntlid": 0, 00:17:30.915 "method": "nvmf_create_subsystem", 00:17:30.915 "req_id": 1 00:17:30.915 } 00:17:30.915 Got JSON-RPC error response 00:17:30.915 response: 00:17:30.915 { 00:17:30.915 "code": -32602, 00:17:30.915 "message": "Invalid cntlid range [1-0]" 00:17:30.915 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.915 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11209 -I 65520 00:17:31.173 [2024-11-20 06:28:02.752694] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11209: invalid cntlid range [1-65520] 00:17:31.173 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:31.173 { 00:17:31.173 "nqn": "nqn.2016-06.io.spdk:cnode11209", 00:17:31.173 "max_cntlid": 65520, 00:17:31.173 "method": "nvmf_create_subsystem", 00:17:31.173 "req_id": 1 00:17:31.173 } 00:17:31.173 Got JSON-RPC error response 00:17:31.173 response: 00:17:31.173 { 
00:17:31.173 "code": -32602, 00:17:31.173 "message": "Invalid cntlid range [1-65520]" 00:17:31.173 }' 00:17:31.173 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:31.173 { 00:17:31.173 "nqn": "nqn.2016-06.io.spdk:cnode11209", 00:17:31.173 "max_cntlid": 65520, 00:17:31.173 "method": "nvmf_create_subsystem", 00:17:31.173 "req_id": 1 00:17:31.173 } 00:17:31.173 Got JSON-RPC error response 00:17:31.173 response: 00:17:31.173 { 00:17:31.173 "code": -32602, 00:17:31.173 "message": "Invalid cntlid range [1-65520]" 00:17:31.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:31.173 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15269 -i 6 -I 5 00:17:31.173 [2024-11-20 06:28:02.949402] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15269: invalid cntlid range [6-5] 00:17:31.173 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:31.173 { 00:17:31.173 "nqn": "nqn.2016-06.io.spdk:cnode15269", 00:17:31.173 "min_cntlid": 6, 00:17:31.173 "max_cntlid": 5, 00:17:31.173 "method": "nvmf_create_subsystem", 00:17:31.173 "req_id": 1 00:17:31.173 } 00:17:31.173 Got JSON-RPC error response 00:17:31.173 response: 00:17:31.173 { 00:17:31.173 "code": -32602, 00:17:31.173 "message": "Invalid cntlid range [6-5]" 00:17:31.173 }' 00:17:31.173 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:31.173 { 00:17:31.173 "nqn": "nqn.2016-06.io.spdk:cnode15269", 00:17:31.173 "min_cntlid": 6, 00:17:31.173 "max_cntlid": 5, 00:17:31.173 "method": "nvmf_create_subsystem", 00:17:31.173 "req_id": 1 00:17:31.173 } 00:17:31.173 Got JSON-RPC error response 00:17:31.173 response: 00:17:31.173 { 00:17:31.173 "code": -32602, 00:17:31.173 "message": "Invalid cntlid range [6-5]" 00:17:31.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:31.173 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:31.432 { 00:17:31.432 "name": "foobar", 00:17:31.432 "method": "nvmf_delete_target", 00:17:31.432 "req_id": 1 00:17:31.432 } 00:17:31.432 Got JSON-RPC error response 00:17:31.432 response: 00:17:31.432 { 00:17:31.432 "code": -32602, 00:17:31.432 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:31.432 }' 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:31.432 { 00:17:31.432 "name": "foobar", 00:17:31.432 "method": "nvmf_delete_target", 00:17:31.432 "req_id": 1 00:17:31.432 } 00:17:31.432 Got JSON-RPC error response 00:17:31.432 response: 00:17:31.432 { 00:17:31.432 "code": -32602, 00:17:31.432 "message": "The specified target doesn't exist, cannot delete it." 
00:17:31.432 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.432 rmmod nvme_tcp 00:17:31.432 rmmod nvme_fabrics 00:17:31.432 rmmod nvme_keyring 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 489927 ']' 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 489927 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 489927 ']' 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 489927 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 489927 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 489927' 00:17:31.432 killing process with pid 489927 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 489927 00:17:31.432 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 489927 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.692 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.597 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:33.856 00:17:33.856 real 0m12.026s 00:17:33.856 user 0m18.556s 00:17:33.856 sys 0m5.415s 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.856 ************************************ 00:17:33.856 END TEST nvmf_invalid 00:17:33.856 ************************************ 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:33.856 ************************************ 00:17:33.856 START TEST nvmf_connect_stress 00:17:33.856 ************************************ 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:33.856 * Looking for test storage... 
00:17:33.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:33.856 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:33.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.857 --rc genhtml_branch_coverage=1 00:17:33.857 --rc genhtml_function_coverage=1 00:17:33.857 --rc genhtml_legend=1 00:17:33.857 --rc geninfo_all_blocks=1 00:17:33.857 --rc geninfo_unexecuted_blocks=1 00:17:33.857 00:17:33.857 ' 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:33.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.857 --rc genhtml_branch_coverage=1 00:17:33.857 --rc genhtml_function_coverage=1 00:17:33.857 --rc genhtml_legend=1 00:17:33.857 --rc geninfo_all_blocks=1 00:17:33.857 --rc geninfo_unexecuted_blocks=1 00:17:33.857 00:17:33.857 ' 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:33.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.857 --rc genhtml_branch_coverage=1 00:17:33.857 --rc genhtml_function_coverage=1 00:17:33.857 --rc genhtml_legend=1 00:17:33.857 --rc geninfo_all_blocks=1 00:17:33.857 --rc geninfo_unexecuted_blocks=1 00:17:33.857 00:17:33.857 ' 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:33.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.857 --rc genhtml_branch_coverage=1 00:17:33.857 --rc genhtml_function_coverage=1 00:17:33.857 --rc genhtml_legend=1 00:17:33.857 --rc geninfo_all_blocks=1 00:17:33.857 --rc geninfo_unexecuted_blocks=1 00:17:33.857 00:17:33.857 ' 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.857 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.116 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:34.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:34.117 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.689 06:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:40.689 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:40.689 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:40.689 Found net devices under 0000:86:00.0: cvl_0_0 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:40.689 Found net devices under 0000:86:00.1: cvl_0_1 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.689 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:40.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:17:40.690 00:17:40.690 --- 10.0.0.2 ping statistics --- 00:17:40.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.690 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:17:40.690 00:17:40.690 --- 10.0.0.1 ping statistics --- 00:17:40.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.690 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=494218 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 494218 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 494218 ']' 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:40.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:40.690 06:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.690 [2024-11-20 06:28:11.807797] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:17:40.690 [2024-11-20 06:28:11.807840] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.690 [2024-11-20 06:28:11.887638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.690 [2024-11-20 06:28:11.927916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.690 [2024-11-20 06:28:11.927954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.690 [2024-11-20 06:28:11.927961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.690 [2024-11-20 06:28:11.927967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.690 [2024-11-20 06:28:11.927972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.690 [2024-11-20 06:28:11.929338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.690 [2024-11-20 06:28:11.929447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.690 [2024-11-20 06:28:11.929447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.690 [2024-11-20 06:28:12.064179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.690 [2024-11-20 06:28:12.084422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.690 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.691 NULL1 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=494343 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.691 06:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.691 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.258 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.258 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:41.258 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.258 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.258 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.516 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.516 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:41.516 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.516 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.516 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.774 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.774 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:41.774 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.774 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.774 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.032 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.032 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:42.032 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.032 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.032 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.599 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.599 06:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:42.599 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.599 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.599 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.858 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.858 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:42.858 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.858 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.858 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.117 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.117 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:43.117 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.117 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.117 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.375 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.375 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:43.375 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.375 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.375 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.634 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.634 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:43.634 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.634 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.634 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.202 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.202 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:44.202 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.202 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.202 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.460 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.460 06:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:44.460 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.460 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.460 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.718 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.718 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:44.718 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.718 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.718 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.976 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.976 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:44.976 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.976 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.976 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.234 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.234 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:45.234 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.234 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.234 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.801 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.801 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:45.801 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.801 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.801 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.060 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.060 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:46.060 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.060 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.060 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.318 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.318 06:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:46.318 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.318 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.318 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.577 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.577 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:46.577 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.577 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.577 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.143 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:47.143 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.143 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.143 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.401 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.401 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:47.401 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.401 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.401 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.660 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.660 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:47.660 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.660 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.660 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.918 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.918 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:47.918 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.918 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.918 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.177 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.177 06:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:48.177 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.177 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.177 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.741 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.741 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:48.741 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.741 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.741 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.998 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.998 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:48.999 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.999 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.999 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.256 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.256 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:49.256 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.256 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.256 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.513 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.513 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:49.513 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.513 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.513 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.077 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.077 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:50.077 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.077 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.077 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.336 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.337 06:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:50.337 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.337 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.337 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.618 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 494343 00:17:50.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (494343) - No such process 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 494343 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:50.618 rmmod nvme_tcp 00:17:50.618 rmmod nvme_fabrics 00:17:50.618 rmmod nvme_keyring 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 494218 ']' 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 494218 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 494218 ']' 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 494218 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 494218 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
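The repeated kill -0 / rpc_cmd records above come from the supervision loop in connect_stress.sh (script lines 34-35 in the trace): the harness polls whether the background stress process is still alive and keeps the RPC socket busy until it exits, then reaps it once kill -0 reports "No such process". A minimal sketch of that loop, reconstructed from the xtrace — the PID is this run's stress process, and the rpc_cmd payload is elided in the trace, so the loop body below is an assumption:

    # Reconstructed from the xtrace records above (connect_stress.sh@34-43).
    STRESS_PID=494343                 # background stress process in this run
    while kill -0 "$STRESS_PID"; do   # line 34: fails once the process is gone
        rpc_cmd                       # line 35: payload not shown in the trace
    done
    wait "$STRESS_PID"                # line 38: reap the child
    rm -f "$testdir/rpc.txt"          # line 39: $testdir assumed = test/nvmf/target
    trap - SIGINT SIGTERM EXIT        # line 41: clear the cleanup trap
    nvmftestfini                      # line 43: tear the target back down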
00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 494218' 00:17:50.618 killing process with pid 494218 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 494218 00:17:50.618 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 494218 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.941 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.848 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:52.848 00:17:52.848 real 0m19.127s 00:17:52.848 user 0m39.506s 00:17:52.848 sys 0m8.579s 00:17:52.848 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:52.848 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.848 ************************************ 00:17:52.848 END TEST nvmf_connect_stress 00:17:52.848 ************************************ 00:17:52.848 06:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:52.848 06:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:52.848 06:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:52.848 06:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.109 ************************************ 00:17:53.109 START TEST nvmf_fused_ordering 00:17:53.109 ************************************ 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:53.109 * Looking for test storage... 
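The nvmftestfini sequence above tears the transport state back down: unload the nvme-tcp/fabrics modules, kill the target (PID 494218 here) if it is still up, scrub only the firewall rules the harness tagged, and remove the test network namespace. A rough standalone form of that cleanup, with the tag and interface names taken verbatim from the trace — the pipeline shape is an assumption, since xtrace shows the three commands as separate records:

    # iptr (nvmf/common.sh@791 above): drop only SPDK-tagged rules, keep the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # remove_spdk_ns (@302) and the address flush (@303) from this run:
    _remove_spdk_ns               # deletes the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1      # initiator-side interface in this run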
00:17:53.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:53.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.109 --rc genhtml_branch_coverage=1 00:17:53.109 --rc genhtml_function_coverage=1 00:17:53.109 --rc genhtml_legend=1 00:17:53.109 --rc geninfo_all_blocks=1 00:17:53.109 --rc geninfo_unexecuted_blocks=1 00:17:53.109 00:17:53.109 ' 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:53.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.109 --rc genhtml_branch_coverage=1 00:17:53.109 --rc genhtml_function_coverage=1 00:17:53.109 --rc genhtml_legend=1 00:17:53.109 --rc geninfo_all_blocks=1 00:17:53.109 --rc geninfo_unexecuted_blocks=1 00:17:53.109 00:17:53.109 ' 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:53.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.109 --rc genhtml_branch_coverage=1 00:17:53.109 --rc genhtml_function_coverage=1 00:17:53.109 --rc genhtml_legend=1 00:17:53.109 --rc geninfo_all_blocks=1 00:17:53.109 --rc geninfo_unexecuted_blocks=1 00:17:53.109 00:17:53.109 ' 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:53.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.109 --rc genhtml_branch_coverage=1 00:17:53.109 --rc genhtml_function_coverage=1 00:17:53.109 --rc genhtml_legend=1 00:17:53.109 --rc geninfo_all_blocks=1 00:17:53.109 --rc geninfo_unexecuted_blocks=1 00:17:53.109 00:17:53.109 ' 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.109 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:53.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:53.110 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.683 06:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:59.683 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:59.683 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:59.683 Found net devices under 0000:86:00.0: cvl_0_0 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.683 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:59.684 Found net devices under 0000:86:00.1: cvl_0_1 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:59.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:17:59.684 00:17:59.684 --- 10.0.0.2 ping statistics --- 00:17:59.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.684 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:17:59.684 00:17:59.684 --- 10.0.0.1 ping statistics --- 00:17:59.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.684 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=499508 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 499508 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 499508 ']' 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:59.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:59.684 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.684 [2024-11-20 06:28:30.920447] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:17:59.684 [2024-11-20 06:28:30.920492] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.684 [2024-11-20 06:28:30.999324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.684 [2024-11-20 06:28:31.039668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.684 [2024-11-20 06:28:31.039701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.684 [2024-11-20 06:28:31.039708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.684 [2024-11-20 06:28:31.039714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.684 [2024-11-20 06:28:31.039718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.684 [2024-11-20 06:28:31.040281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.684 [2024-11-20 06:28:31.174930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.684 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.685 [2024-11-20 06:28:31.195118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.685 NULL1 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.685 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:59.685 [2024-11-20 06:28:31.252114] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
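Before launching the fused_ordering binary, fused_ordering.sh provisions the target over RPC, as the records above show: a TCP transport with 8 KiB IO units, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev (512-byte blocks) attached as namespace 1. The same sequence written against scripts/rpc.py, which rpc_cmd wraps in SPDK's test helpers — the flags are copied verbatim from the trace; only the direct rpc.py invocation is an assumption:

    # fused_ordering.sh@15-20, replayed through rpc.py (paths assume an SPDK checkout)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10   # allow any host, serial, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB, 512 B blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1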
00:17:59.685 [2024-11-20 06:28:31.252145] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499527 ] 00:17:59.944 Attached to nqn.2016-06.io.spdk:cnode1 00:17:59.944 Namespace ID: 1 size: 1GB 00:17:59.944 fused_ordering(0) 00:17:59.944 fused_ordering(1) 00:17:59.944 fused_ordering(2) 00:17:59.944 fused_ordering(3) 00:17:59.944 fused_ordering(4) 00:17:59.944 fused_ordering(5) 00:17:59.944 fused_ordering(6) 00:17:59.944 fused_ordering(7) 00:17:59.944 fused_ordering(8) 00:17:59.944 fused_ordering(9) 00:17:59.944 fused_ordering(10) 00:17:59.944 fused_ordering(11) 00:17:59.944 fused_ordering(12) 00:17:59.944 fused_ordering(13) 00:17:59.944 fused_ordering(14) 00:17:59.944 fused_ordering(15) 00:17:59.944 fused_ordering(16) 00:17:59.944 fused_ordering(17) 00:17:59.944 fused_ordering(18) 00:17:59.944 fused_ordering(19) 00:17:59.944 fused_ordering(20) 00:17:59.944 fused_ordering(21) 00:17:59.944 fused_ordering(22) 00:17:59.944 fused_ordering(23) 00:17:59.944 fused_ordering(24) 00:17:59.944 fused_ordering(25) 00:17:59.944 fused_ordering(26) 00:17:59.944 fused_ordering(27) 00:17:59.944 fused_ordering(28) 00:17:59.944 fused_ordering(29) 00:17:59.944 fused_ordering(30) 00:17:59.944 fused_ordering(31) 00:17:59.944 fused_ordering(32) 00:17:59.944 fused_ordering(33) 00:17:59.944 fused_ordering(34) 00:17:59.944 fused_ordering(35) 00:17:59.944 fused_ordering(36) 00:17:59.944 fused_ordering(37) 00:17:59.944 fused_ordering(38) 00:17:59.944 fused_ordering(39) 00:17:59.944 fused_ordering(40) 00:17:59.944 fused_ordering(41) 00:17:59.944 fused_ordering(42) 00:17:59.944 fused_ordering(43) 00:17:59.944 fused_ordering(44) 00:17:59.944 fused_ordering(45) 00:17:59.944 fused_ordering(46) 00:17:59.944 fused_ordering(47) 00:17:59.944 fused_ordering(48) 00:17:59.944 fused_ordering(49) 00:17:59.944 fused_ordering(50) 00:17:59.944 fused_ordering(51) 00:17:59.944 fused_ordering(52) 00:17:59.944 fused_ordering(53) 00:17:59.944 fused_ordering(54) 00:17:59.944 fused_ordering(55) 00:17:59.944 fused_ordering(56) 00:17:59.944 fused_ordering(57) 00:17:59.944 fused_ordering(58) 00:17:59.944 fused_ordering(59) 00:17:59.944 fused_ordering(60) 00:17:59.944 fused_ordering(61) 00:17:59.944 fused_ordering(62) 00:17:59.944 fused_ordering(63) 00:17:59.944 fused_ordering(64) 00:17:59.944 fused_ordering(65) 00:17:59.944 fused_ordering(66) 00:17:59.944 fused_ordering(67) 00:17:59.944 fused_ordering(68) 00:17:59.944 fused_ordering(69) 00:17:59.944 fused_ordering(70) 00:17:59.944 fused_ordering(71) 00:17:59.944 fused_ordering(72) 00:17:59.944 fused_ordering(73) 00:17:59.944 fused_ordering(74) 00:17:59.944 fused_ordering(75) 00:17:59.944 fused_ordering(76) 00:17:59.944 fused_ordering(77) 00:17:59.944 fused_ordering(78) 00:17:59.944 fused_ordering(79) 00:17:59.944 fused_ordering(80) 00:17:59.944 fused_ordering(81) 00:17:59.944 fused_ordering(82) 00:17:59.944 fused_ordering(83) 00:17:59.944 fused_ordering(84) 00:17:59.944 fused_ordering(85) 00:17:59.944 fused_ordering(86) 00:17:59.944 fused_ordering(87) 00:17:59.944 fused_ordering(88) 00:17:59.944 fused_ordering(89) 00:17:59.944 fused_ordering(90) 00:17:59.944 fused_ordering(91) 00:17:59.944 fused_ordering(92) 00:17:59.944 fused_ordering(93) 00:17:59.944 fused_ordering(94) 00:17:59.944 fused_ordering(95) 00:17:59.944 fused_ordering(96) 00:17:59.944 fused_ordering(97) 00:17:59.944 fused_ordering(98) 
00:17:59.944 fused_ordering(99) ... 00:18:01.291 fused_ordering(1023) [repetitive per-entry output for fused_ordering entries 99 through 1023 elided; the single target-side error interleaved in this run, near entry 821, is kept below]
00:18:01.290 [2024-11-20 06:28:33.097089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acef00 is same with the state(6) to be set
00:18:01.291 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:01.291 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:01.291 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:01.291 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:01.291 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.291 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:01.291 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.291 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.549 rmmod nvme_tcp 00:18:01.549 rmmod nvme_fabrics 00:18:01.549 rmmod nvme_keyring 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- 
# set -e 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 499508 ']' 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 499508 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 499508 ']' 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 499508 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 499508 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 499508' 00:18:01.549 killing process with pid 499508 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 499508 00:18:01.549 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 499508 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.809 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.715 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:03.715 00:18:03.715 real 0m10.761s 00:18:03.715 user 0m5.117s 00:18:03.715 sys 0m5.849s 00:18:03.715 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:03.715 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set 
+x 00:18:03.715 ************************************ 00:18:03.715 END TEST nvmf_fused_ordering 00:18:03.715 ************************************ 00:18:03.715 06:28:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:03.715 06:28:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:03.715 06:28:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:03.715 06:28:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 ************************************ 00:18:03.715 START TEST nvmf_ns_masking 00:18:03.715 ************************************ 00:18:03.715 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:03.976 * Looking for test storage... 00:18:03.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.976 --rc genhtml_branch_coverage=1 00:18:03.976 --rc genhtml_function_coverage=1 00:18:03.976 --rc genhtml_legend=1 00:18:03.976 --rc geninfo_all_blocks=1 00:18:03.976 --rc geninfo_unexecuted_blocks=1 00:18:03.976 00:18:03.976 ' 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.976 --rc genhtml_branch_coverage=1 00:18:03.976 --rc genhtml_function_coverage=1 00:18:03.976 --rc genhtml_legend=1 00:18:03.976 --rc geninfo_all_blocks=1 00:18:03.976 --rc geninfo_unexecuted_blocks=1 00:18:03.976 00:18:03.976 ' 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.976 --rc genhtml_branch_coverage=1 00:18:03.976 --rc genhtml_function_coverage=1 00:18:03.976 --rc genhtml_legend=1 00:18:03.976 --rc geninfo_all_blocks=1 00:18:03.976 --rc geninfo_unexecuted_blocks=1 00:18:03.976 00:18:03.976 ' 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.976 --rc genhtml_branch_coverage=1 00:18:03.976 --rc genhtml_function_coverage=1 00:18:03.976 --rc genhtml_legend=1 00:18:03.976 --rc geninfo_all_blocks=1 00:18:03.976 --rc geninfo_unexecuted_blocks=1 00:18:03.976 00:18:03.976 ' 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[remaining toolchain and system PATH entries, same set as the export.sh@2 value above, elided] 00:18:03.976 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same PATH entries elided] 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same PATH entries elided] 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:03.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=106ebb75-b2c3-4f8c-b90f-74af8bd361cc 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=db2532d8-b36b-42a4-ad2d-6959bc049b4e 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6f37cfbc-ce73-4bab-95a1-c33328fccc27 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:03.977 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:10.552 06:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:10.552 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:10.552 06:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:10.552 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.552 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:10.553 Found net devices under 0000:86:00.0: cvl_0_0 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
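The loop being traced here resolves each matched PCI function to its kernel net interface through sysfs; the first e810 port has just resolved to cvl_0_0, and the second is processed below. A minimal standalone sketch of that idiom, with the device addresses taken from the log; reading operstate is an assumption standing in for the up check the trace only shows as [[ up == up ]]:

    # Map NVMf-capable PCI functions to net interface names via sysfs,
    # mirroring the pci_net_devs idiom in nvmf/common.sh above.
    pci_devs=(0000:86:00.0 0000:86:00.1)   # the two 0x8086:0x159b ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        for net_dev in "${pci_net_devs[@]}"; do
            # assumption: link state comes from sysfs operstate
            [[ $(< "/sys/class/net/$net_dev/operstate") == up ]] || continue
            echo "Found net devices under $pci: $net_dev"
            net_devs+=("$net_dev")
        done
    done

With both ports up this yields cvl_0_0 and cvl_0_1, which the trace below then splits into a target interface and an initiator interface before building the test network namespace.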
00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:10.553 Found net devices under 0000:86:00.1: cvl_0_1 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.553 06:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:10.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:18:10.553 00:18:10.553 --- 10.0.0.2 ping statistics --- 00:18:10.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.553 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:18:10.553 00:18:10.553 --- 10.0.0.1 ping statistics --- 00:18:10.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.553 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=503522 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 503522 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 503522 ']' 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.553 [2024-11-20 06:28:41.780082] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:18:10.553 [2024-11-20 06:28:41.780126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.553 [2024-11-20 06:28:41.841758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.553 [2024-11-20 06:28:41.882757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.553 [2024-11-20 06:28:41.882793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.553 [2024-11-20 06:28:41.882800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.553 [2024-11-20 06:28:41.882806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.553 [2024-11-20 06:28:41.882812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
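At this point the trace above has finished wiring the TCP test bed. Stripped of the xtrace noise, that sequence reduces to roughly the following sketch; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this rig, and the ordering comments are added here:

  ip netns add cvl_0_0_ns_spdk                        # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF   # target runs inside the namespace

Splitting the two ports of one adapter across a namespace boundary is what lets a single host act as both target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, root namespace) over a real TCP path.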
00:18:10.553 [2024-11-20 06:28:41.883357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.553 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.553 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.553 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:10.553 [2024-11-20 06:28:42.182812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.553 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:10.553 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:10.553 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:10.813 Malloc1 00:18:10.813 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:10.813 Malloc2 00:18:10.813 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:11.072 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:11.330 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.589 [2024-11-20 06:28:43.195300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.589 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:11.589 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f37cfbc-ce73-4bab-95a1-c33328fccc27 -a 10.0.0.2 -s 4420 -i 4 00:18:11.589 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.589 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:11.589 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.589 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:11.589 
06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.122 [ 0]:0x1 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=02368ca36a0e4235ad532fe0df5675a3 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 02368ca36a0e4235ad532fe0df5675a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.122 [ 0]:0x1 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=02368ca36a0e4235ad532fe0df5675a3 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 02368ca36a0e4235ad532fe0df5675a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.122 06:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:14.122 [ 1]:0x2 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84121940321146d58782fbe2deeb8b98 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84121940321146d58782fbe2deeb8b98 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:14.122 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.381 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:14.639 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:14.898 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:14.898 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f37cfbc-ce73-4bab-95a1-c33328fccc27 -a 10.0.0.2 -s 4420 -i 4 00:18:14.898 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:14.898 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:14.898 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.898 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:18:14.898 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:18:14.898 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:16.802 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:16.802 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:16.802 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:16.802 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:16.802 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.802 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:18:16.802 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:16.802 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.061 [ 0]:0x2 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=84121940321146d58782fbe2deeb8b98 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84121940321146d58782fbe2deeb8b98 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.061 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.320 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:17.320 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.320 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.320 [ 0]:0x1 00:18:17.320 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.320 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=02368ca36a0e4235ad532fe0df5675a3 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 02368ca36a0e4235ad532fe0df5675a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.320 [ 1]:0x2 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84121940321146d58782fbe2deeb8b98 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84121940321146d58782fbe2deeb8b98 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.320 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.579 06:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.579 [ 0]:0x2 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84121940321146d58782fbe2deeb8b98 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84121940321146d58782fbe2deeb8b98 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:17.579 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.837 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.837 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:17.837 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f37cfbc-ce73-4bab-95a1-c33328fccc27 -a 10.0.0.2 -s 4420 -i 4 00:18:18.094 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:18.095 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:18.095 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.095 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:18:18.095 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:18:18.095 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:20.628 [ 0]:0x1 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=02368ca36a0e4235ad532fe0df5675a3 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 02368ca36a0e4235ad532fe0df5675a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.628 [ 1]:0x2 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84121940321146d58782fbe2deeb8b98 00:18:20.628 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84121940321146d58782fbe2deeb8b98 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.629 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.629 [ 0]:0x2 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84121940321146d58782fbe2deeb8b98 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84121940321146d58782fbe2deeb8b98 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.629 06:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:20.629 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:20.888 [2024-11-20 06:28:52.485550] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:20.888 request: 00:18:20.888 { 00:18:20.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.888 "nsid": 2, 00:18:20.888 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.888 "method": "nvmf_ns_remove_host", 00:18:20.888 "req_id": 1 00:18:20.888 } 00:18:20.888 Got JSON-RPC error response 00:18:20.888 response: 00:18:20.888 { 00:18:20.888 "code": -32602, 00:18:20.888 "message": "Invalid parameters" 00:18:20.888 } 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:20.888 06:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:20.888 [ 0]:0x2 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84121940321146d58782fbe2deeb8b98 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84121940321146d58782fbe2deeb8b98 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:20.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=505369 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 505369 
/var/tmp/host.sock 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 505369 ']' 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:20.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:20.888 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:21.147 [2024-11-20 06:28:52.722494] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:18:21.147 [2024-11-20 06:28:52.722545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid505369 ] 00:18:21.147 [2024-11-20 06:28:52.799021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.147 [2024-11-20 06:28:52.839523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.406 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:21.406 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:21.406 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.664 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:21.664 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 106ebb75-b2c3-4f8c-b90f-74af8bd361cc 00:18:21.664 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:21.664 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 106EBB75B2C34F8CB90F74AF8BD361CC -i 00:18:21.923 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid db2532d8-b36b-42a4-ad2d-6959bc049b4e 00:18:21.923 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:21.923 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DB2532D8B36B42A4AD2D6959BC049B4E -i 00:18:22.182 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:22.441 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:22.441 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:22.441 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:23.008 nvme0n1 00:18:23.009 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:23.009 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:23.267 nvme1n2 00:18:23.267 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:23.267 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:23.267 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:23.267 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:23.267 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:23.526 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:23.526 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:23.526 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:23.526 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:23.784 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 106ebb75-b2c3-4f8c-b90f-74af8bd361cc == \1\0\6\e\b\b\7\5\-\b\2\c\3\-\4\f\8\c\-\b\9\0\f\-\7\4\a\f\8\b\d\3\6\1\c\c ]] 00:18:23.784 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:23.784 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:23.784 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:24.043 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
db2532d8-b36b-42a4-ad2d-6959bc049b4e == \d\b\2\5\3\2\d\8\-\b\3\6\b\-\4\2\a\4\-\a\d\2\d\-\6\9\5\9\b\c\0\4\9\b\4\e ]] 00:18:24.043 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:24.043 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 106ebb75-b2c3-4f8c-b90f-74af8bd361cc 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 106EBB75B2C34F8CB90F74AF8BD361CC 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 106EBB75B2C34F8CB90F74AF8BD361CC 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:24.301 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 106EBB75B2C34F8CB90F74AF8BD361CC 00:18:24.560 [2024-11-20 06:28:56.215761] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:24.560 [2024-11-20 06:28:56.215794] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:24.560 [2024-11-20 06:28:56.215801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.560 request: 00:18:24.560 { 00:18:24.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.560 "namespace": { 00:18:24.560 "bdev_name": 
"invalid", 00:18:24.560 "nsid": 1, 00:18:24.560 "nguid": "106EBB75B2C34F8CB90F74AF8BD361CC", 00:18:24.560 "no_auto_visible": false 00:18:24.560 }, 00:18:24.560 "method": "nvmf_subsystem_add_ns", 00:18:24.560 "req_id": 1 00:18:24.560 } 00:18:24.560 Got JSON-RPC error response 00:18:24.560 response: 00:18:24.560 { 00:18:24.560 "code": -32602, 00:18:24.560 "message": "Invalid parameters" 00:18:24.560 } 00:18:24.560 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:24.560 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:24.560 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.560 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:24.560 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 106ebb75-b2c3-4f8c-b90f-74af8bd361cc 00:18:24.560 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:24.560 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 106EBB75B2C34F8CB90F74AF8BD361CC -i 00:18:24.818 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:26.730 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:26.730 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:26.730 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 505369 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 505369 ']' 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 505369 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 505369 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:26.987 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 505369' 00:18:26.988 killing process with pid 505369 00:18:26.988 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 505369 00:18:26.988 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 505369 00:18:27.246 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:27.505 rmmod nvme_tcp 00:18:27.505 rmmod nvme_fabrics 00:18:27.505 rmmod nvme_keyring 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 503522 ']' 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 503522 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 503522 ']' 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 503522 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 503522 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 503522' 00:18:27.505 killing process with pid 503522 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 503522 00:18:27.505 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 503522 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:27.768 
06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.768 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:30.321 00:18:30.321 real 0m26.060s 00:18:30.321 user 0m31.224s 00:18:30.321 sys 0m7.069s 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:30.321 ************************************ 00:18:30.321 END TEST nvmf_ns_masking 00:18:30.321 ************************************ 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:30.321 ************************************ 00:18:30.321 START TEST nvmf_nvme_cli 00:18:30.321 ************************************ 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:30.321 * Looking for test storage... 
00:18:30.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:30.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.321 --rc genhtml_branch_coverage=1 00:18:30.321 --rc genhtml_function_coverage=1 00:18:30.321 --rc genhtml_legend=1 00:18:30.321 --rc geninfo_all_blocks=1 00:18:30.321 --rc geninfo_unexecuted_blocks=1 00:18:30.321 00:18:30.321 ' 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:30.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.321 --rc genhtml_branch_coverage=1 00:18:30.321 --rc genhtml_function_coverage=1 00:18:30.321 --rc genhtml_legend=1 00:18:30.321 --rc geninfo_all_blocks=1 00:18:30.321 --rc geninfo_unexecuted_blocks=1 00:18:30.321 00:18:30.321 ' 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:30.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.321 --rc genhtml_branch_coverage=1 00:18:30.321 --rc genhtml_function_coverage=1 00:18:30.321 --rc genhtml_legend=1 00:18:30.321 --rc geninfo_all_blocks=1 00:18:30.321 --rc geninfo_unexecuted_blocks=1 00:18:30.321 00:18:30.321 ' 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:30.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.321 --rc genhtml_branch_coverage=1 00:18:30.321 --rc genhtml_function_coverage=1 00:18:30.321 --rc genhtml_legend=1 00:18:30.321 --rc geninfo_all_blocks=1 00:18:30.321 --rc geninfo_unexecuted_blocks=1 00:18:30.321 00:18:30.321 ' 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.321 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:30.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:30.322 06:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:30.322 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:36.905 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:36.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.905 
06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:36.905 Found net devices under 0000:86:00.0: cvl_0_0 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:36.905 Found net devices under 0000:86:00.1: cvl_0_1 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.905 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:36.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:18:36.906 00:18:36.906 --- 10.0.0.2 ping statistics --- 00:18:36.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.906 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:36.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:18:36.906 00:18:36.906 --- 10.0.0.1 ping statistics --- 00:18:36.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.906 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=510018 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 510018 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 510018 ']' 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.906 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 [2024-11-20 06:29:07.910274] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:18:36.906 [2024-11-20 06:29:07.910325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.906 [2024-11-20 06:29:07.989791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.906 [2024-11-20 06:29:08.032921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.906 [2024-11-20 06:29:08.032956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.906 [2024-11-20 06:29:08.032963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.906 [2024-11-20 06:29:08.032969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.906 [2024-11-20 06:29:08.032974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.906 [2024-11-20 06:29:08.034535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.906 [2024-11-20 06:29:08.034622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.906 [2024-11-20 06:29:08.034732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.906 [2024-11-20 06:29:08.034732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.165 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.165 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:18:37.165 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.165 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 [2024-11-20 06:29:08.793991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 Malloc0 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 Malloc1 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 [2024-11-20 06:29:08.891270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.166 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:37.425 00:18:37.425 Discovery Log Number of Records 2, Generation counter 2 00:18:37.425 =====Discovery Log Entry 0====== 00:18:37.425 trtype: tcp 00:18:37.425 adrfam: ipv4 00:18:37.425 subtype: current discovery subsystem 00:18:37.425 treq: not required 00:18:37.425 portid: 0 00:18:37.425 trsvcid: 4420 00:18:37.425 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:37.425 traddr: 10.0.0.2 00:18:37.425 eflags: explicit discovery connections, duplicate discovery information 00:18:37.425 sectype: none 00:18:37.425 =====Discovery Log Entry 1====== 00:18:37.425 trtype: tcp 00:18:37.425 adrfam: ipv4 00:18:37.425 subtype: nvme subsystem 00:18:37.425 treq: not required 00:18:37.425 portid: 0 00:18:37.425 trsvcid: 4420 00:18:37.425 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:37.425 traddr: 10.0.0.2 00:18:37.425 eflags: none 00:18:37.425 sectype: none 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:37.425 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:38.361 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:38.361 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:18:38.361 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.361 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:18:38.361 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:18:38.361 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:40.892 06:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:40.892 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:40.893 /dev/nvme0n2 ]] 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:40.893 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:41.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.152 06:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.152 rmmod nvme_tcp 00:18:41.152 rmmod nvme_fabrics 00:18:41.152 rmmod nvme_keyring 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 510018 ']' 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 510018 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 510018 ']' 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 510018 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 510018 
00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 510018' 00:18:41.152 killing process with pid 510018 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 510018 00:18:41.152 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 510018 00:18:41.411 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.411 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.411 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.412 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.949 00:18:43.949 real 0m13.511s 00:18:43.949 user 0m22.121s 00:18:43.949 sys 0m5.071s 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.949 ************************************ 00:18:43.949 END TEST nvmf_nvme_cli 00:18:43.949 ************************************ 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.949 ************************************ 00:18:43.949 START TEST nvmf_vfio_user 00:18:43.949 ************************************ 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:43.949 * Looking for test storage... 00:18:43.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:43.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.949 --rc genhtml_branch_coverage=1 00:18:43.949 --rc genhtml_function_coverage=1 00:18:43.949 --rc genhtml_legend=1 00:18:43.949 --rc geninfo_all_blocks=1 00:18:43.949 --rc geninfo_unexecuted_blocks=1 00:18:43.949 00:18:43.949 ' 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:43.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.949 --rc genhtml_branch_coverage=1 00:18:43.949 --rc genhtml_function_coverage=1 00:18:43.949 --rc genhtml_legend=1 00:18:43.949 --rc geninfo_all_blocks=1 00:18:43.949 --rc geninfo_unexecuted_blocks=1 00:18:43.949 00:18:43.949 ' 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:43.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.949 --rc genhtml_branch_coverage=1 00:18:43.949 --rc genhtml_function_coverage=1 00:18:43.949 --rc genhtml_legend=1 00:18:43.949 --rc geninfo_all_blocks=1 00:18:43.949 --rc geninfo_unexecuted_blocks=1 00:18:43.949 00:18:43.949 ' 00:18:43.949 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:43.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.949 --rc genhtml_branch_coverage=1 00:18:43.949 --rc genhtml_function_coverage=1 00:18:43.949 --rc genhtml_legend=1 00:18:43.949 --rc geninfo_all_blocks=1 00:18:43.949 --rc geninfo_unexecuted_blocks=1 00:18:43.949 00:18:43.949 ' 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
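Note on the "integer expression expected" message above: it comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']'. The variable under test is empty, and test's -eq operator needs an integer on both sides, so the comparison prints a complaint and simply takes the false branch; the run is unaffected. A minimal sketch of the usual guard (VAR is a stand-in for whatever variable line 33 actually reads, which the trace does not name):

    # Hedged sketch, not the actual SPDK fix: default the operand to 0 so
    # the numeric test never sees an empty string.
    if [ "${VAR:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi

The heavily repeated /opt/go, /opt/protoc and /opt/golangci entries in PATH above are a separate, harmless artifact: paths/export.sh prepends those directories every time it is sourced, and it is sourced once per nested script. A dedup sketch (illustrative, not part of the test scripts):

    dedup_path() {
        local out="" dir
        local IFS=:
        for dir in $PATH; do
            # keep only the first occurrence of each directory
            case ":$out:" in *":$dir:"*) ;; *) out=${out:+$out:}$dir ;; esac
        done
        PATH=$out
    }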
00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=511483 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 511483' 00:18:43.950 Process pid: 511483 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 511483 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 511483 ']' 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:43.950 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:43.950 [2024-11-20 06:29:15.520713] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:18:43.950 [2024-11-20 06:29:15.520765] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.950 [2024-11-20 06:29:15.596246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.950 [2024-11-20 06:29:15.638091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.950 [2024-11-20 06:29:15.638123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
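Note: waitforlisten above blocks until the freshly launched nvmf_tgt (pid 511483) answers RPCs on /var/tmp/spdk.sock; the EAL and reactor notices around this point are the target coming up on cores 0-3. A condensed sketch of that wait loop, assuming scripts/rpc.py is reachable (the real helper in autotest_common.sh distinguishes more failure modes; the retry budget mirrors the max_retries=100 seen in the trace):

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
            # rpc_get_methods succeeds once the app is serving the socket
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                     # timed out
    }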
00:18:43.951 [2024-11-20 06:29:15.638131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.951 [2024-11-20 06:29:15.638137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.951 [2024-11-20 06:29:15.638143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.951 [2024-11-20 06:29:15.639521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.951 [2024-11-20 06:29:15.639630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.951 [2024-11-20 06:29:15.639735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.951 [2024-11-20 06:29:15.639735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.518 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:44.518 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:18:44.518 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:45.896 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:45.896 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:45.896 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:45.896 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:45.896 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:45.896 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:46.154 Malloc1 00:18:46.154 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:46.154 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:46.412 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:46.670 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:46.670 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:46.670 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:46.930 Malloc2 00:18:46.930 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
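Note: the RPC calls above (and the two remaining cnode2 calls just below) are setup_nvmf_vfio_user provisioning NUM_DEVICES=2 controllers: the VFIOUSER transport is created once, then each device gets a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace backed by that bdev, and a listener whose address is a filesystem path rather than an IP and port. Condensed into the loop the script effectively runs (rpc.py path shortened; a sketch of the trace, not a replacement for nvmf_vfio_user.sh):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER           # once, before the loop
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i   # 64 MiB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done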
00:18:47.189 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:47.189 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:47.474 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:47.474 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:47.474 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:47.474 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:47.474 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:47.474 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:47.474 [2024-11-20 06:29:19.231584] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:18:47.474 [2024-11-20 06:29:19.231617] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512027 ] 00:18:47.474 [2024-11-20 06:29:19.271694] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:47.474 [2024-11-20 06:29:19.277009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:47.474 [2024-11-20 06:29:19.277034] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdeb44ed000 00:18:47.474 [2024-11-20 06:29:19.278008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.474 [2024-11-20 06:29:19.279009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.474 [2024-11-20 06:29:19.280018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.474 [2024-11-20 06:29:19.281022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:47.474 [2024-11-20 06:29:19.282031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:47.474 [2024-11-20 06:29:19.283035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.474 [2024-11-20 06:29:19.284041] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:47.474 [2024-11-20 06:29:19.285038] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.474 [2024-11-20 06:29:19.286050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:47.474 [2024-11-20 06:29:19.286059] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdeb44e2000 00:18:47.474 [2024-11-20 06:29:19.286976] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:47.767 [2024-11-20 06:29:19.296455] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:47.767 [2024-11-20 06:29:19.296484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:47.767 [2024-11-20 06:29:19.302147] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:47.767 [2024-11-20 06:29:19.302184] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:47.767 [2024-11-20 06:29:19.302256] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:47.767 [2024-11-20 06:29:19.302273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:47.767 [2024-11-20 06:29:19.302278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:47.767 [2024-11-20 06:29:19.303146] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:47.767 [2024-11-20 06:29:19.303154] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:47.767 [2024-11-20 06:29:19.303161] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:47.767 [2024-11-20 06:29:19.304151] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:47.767 [2024-11-20 06:29:19.304159] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:47.767 [2024-11-20 06:29:19.304166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:47.767 [2024-11-20 06:29:19.305158] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:47.767 [2024-11-20 06:29:19.305167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:47.767 [2024-11-20 06:29:19.306156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
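Note: the register offsets in the init sequence above are the standard NVMe controller registers: 0x0 is CAP, 0x8 is VS, 0x14 is CC and 0x1c is CSTS, so SPDK is reading capabilities and version, then clearing CC.EN and waiting for CSTS.RDY to drop before re-enabling. The VS value 0x10300 packs major/minor/tertiary into bits 31:16, 15:8 and 7:0, i.e. NVMe 1.3.0, matching the "NVMe Specification Version (VS): 1.3" line in the identify dump further down. Decoding sketch:

    vs=0x10300
    printf 'NVMe %d.%d.%d\n' $(( vs >> 16 )) $(( (vs >> 8) & 0xff )) $(( vs & 0xff ))
    # prints: NVMe 1.3.0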
00:18:47.767 [2024-11-20 06:29:19.306163] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:47.767 [2024-11-20 06:29:19.306168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:47.767 [2024-11-20 06:29:19.306176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:47.767 [2024-11-20 06:29:19.306284] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:47.767 [2024-11-20 06:29:19.306289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:47.767 [2024-11-20 06:29:19.306293] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:47.767 [2024-11-20 06:29:19.307169] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:47.767 [2024-11-20 06:29:19.308167] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:47.767 [2024-11-20 06:29:19.309174] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:47.767 [2024-11-20 06:29:19.310171] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:47.767 [2024-11-20 06:29:19.310247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:47.767 [2024-11-20 06:29:19.311184] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:47.767 [2024-11-20 06:29:19.311191] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:47.767 [2024-11-20 06:29:19.311195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:47.767 [2024-11-20 06:29:19.311215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:47.767 [2024-11-20 06:29:19.311227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:47.767 [2024-11-20 06:29:19.311242] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:47.767 [2024-11-20 06:29:19.311247] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:47.767 [2024-11-20 06:29:19.311250] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.767 [2024-11-20 06:29:19.311264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:47.767 [2024-11-20 06:29:19.311311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:47.767 [2024-11-20 06:29:19.311321] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:47.767 [2024-11-20 06:29:19.311326] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:47.767 [2024-11-20 06:29:19.311330] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:47.767 [2024-11-20 06:29:19.311334] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:47.767 [2024-11-20 06:29:19.311342] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:47.767 [2024-11-20 06:29:19.311346] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:47.767 [2024-11-20 06:29:19.311350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:47.767 [2024-11-20 06:29:19.311360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:47.767 [2024-11-20 06:29:19.311370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:47.767 [2024-11-20 06:29:19.311381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:47.767 [2024-11-20 06:29:19.311391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.767 [2024-11-20 06:29:19.311399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.767 [2024-11-20 06:29:19.311406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.767 [2024-11-20 06:29:19.311414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.767 [2024-11-20 06:29:19.311418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:47.767 [2024-11-20 06:29:19.311424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:47.767 [2024-11-20 06:29:19.311432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:47.767 [2024-11-20 06:29:19.311438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311445] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:47.768 
[2024-11-20 06:29:19.311449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311541] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:47.768 [2024-11-20 06:29:19.311545] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:47.768 [2024-11-20 06:29:19.311548] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.768 [2024-11-20 06:29:19.311553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311573] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:47.768 [2024-11-20 06:29:19.311582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311594] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:47.768 [2024-11-20 06:29:19.311598] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:47.768 [2024-11-20 06:29:19.311601] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.768 [2024-11-20 06:29:19.311606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311652] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:47.768 [2024-11-20 06:29:19.311656] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:47.768 [2024-11-20 06:29:19.311658] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.768 [2024-11-20 06:29:19.311664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311715] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:47.768 [2024-11-20 06:29:19.311719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:47.768 [2024-11-20 06:29:19.311723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:47.768 [2024-11-20 06:29:19.311739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311822] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:47.768 [2024-11-20 06:29:19.311827] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:47.768 [2024-11-20 06:29:19.311830] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:47.768 [2024-11-20 06:29:19.311833] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:47.768 [2024-11-20 06:29:19.311836] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:47.768 [2024-11-20 06:29:19.311841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:47.768 [2024-11-20 06:29:19.311848] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:47.768 [2024-11-20 06:29:19.311852] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:47.768 [2024-11-20 06:29:19.311854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.768 [2024-11-20 06:29:19.311860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311866] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:47.768 [2024-11-20 06:29:19.311869] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:47.768 [2024-11-20 06:29:19.311872] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.768 [2024-11-20 06:29:19.311878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311884] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:47.768 [2024-11-20 06:29:19.311888] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:47.768 [2024-11-20 06:29:19.311891] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.768 [2024-11-20 06:29:19.311896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:47.768 [2024-11-20 06:29:19.311902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:47.768 [2024-11-20 06:29:19.311930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:47.768 ===================================================== 00:18:47.768 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:47.768 ===================================================== 00:18:47.768 Controller Capabilities/Features 00:18:47.768 ================================ 00:18:47.768 Vendor ID: 4e58 00:18:47.768 Subsystem Vendor ID: 4e58 00:18:47.768 Serial Number: SPDK1 00:18:47.768 Model Number: SPDK bdev Controller 00:18:47.768 Firmware Version: 25.01 00:18:47.768 Recommended Arb Burst: 6 00:18:47.768 IEEE OUI Identifier: 8d 6b 50 00:18:47.768 Multi-path I/O 00:18:47.768 May have multiple subsystem ports: Yes 00:18:47.768 May have multiple controllers: Yes 00:18:47.768 Associated with SR-IOV VF: No 00:18:47.768 Max Data Transfer Size: 131072 00:18:47.768 Max Number of Namespaces: 32 00:18:47.768 Max Number of I/O Queues: 127 00:18:47.768 NVMe Specification Version (VS): 1.3 00:18:47.768 NVMe Specification Version (Identify): 1.3 00:18:47.768 Maximum Queue Entries: 256 00:18:47.768 Contiguous Queues Required: Yes 00:18:47.768 Arbitration Mechanisms Supported 00:18:47.768 Weighted Round Robin: Not Supported 00:18:47.768 Vendor Specific: Not Supported 00:18:47.768 Reset Timeout: 15000 ms 00:18:47.768 Doorbell Stride: 4 bytes 00:18:47.768 NVM Subsystem Reset: Not Supported 00:18:47.768 Command Sets Supported 00:18:47.768 NVM Command Set: Supported 00:18:47.768 Boot Partition: Not Supported 00:18:47.768 Memory Page Size Minimum: 4096 bytes 00:18:47.768 Memory Page Size Maximum: 4096 bytes 00:18:47.768 Persistent Memory Region: Not Supported 00:18:47.768 Optional Asynchronous Events Supported 00:18:47.769 Namespace Attribute Notices: Supported 00:18:47.769 Firmware Activation Notices: Not Supported 00:18:47.769 ANA Change Notices: Not Supported 00:18:47.769 PLE Aggregate Log Change Notices: Not Supported 00:18:47.769 LBA Status Info Alert Notices: Not Supported 00:18:47.769 EGE Aggregate Log Change Notices: Not Supported 00:18:47.769 Normal NVM Subsystem Shutdown event: Not Supported 00:18:47.769 Zone Descriptor Change Notices: Not Supported 00:18:47.769 Discovery Log Change Notices: Not Supported 00:18:47.769 Controller Attributes 00:18:47.769 128-bit Host Identifier: Supported 00:18:47.769 Non-Operational Permissive Mode: Not Supported 00:18:47.769 NVM Sets: Not Supported 00:18:47.769 Read Recovery Levels: Not Supported 00:18:47.769 Endurance Groups: Not Supported 00:18:47.769 Predictable Latency Mode: Not Supported 00:18:47.769 Traffic Based Keep ALive: Not Supported 00:18:47.769 Namespace Granularity: Not Supported 00:18:47.769 SQ Associations: Not Supported 00:18:47.769 UUID List: Not Supported 00:18:47.769 Multi-Domain Subsystem: Not Supported 00:18:47.769 Fixed Capacity Management: Not Supported 00:18:47.769 Variable Capacity Management: Not Supported 00:18:47.769 Delete Endurance Group: Not Supported 00:18:47.769 Delete NVM Set: Not Supported 00:18:47.769 Extended LBA Formats Supported: Not Supported 00:18:47.769 Flexible Data Placement Supported: Not Supported 00:18:47.769 00:18:47.769 Controller Memory Buffer Support 00:18:47.769 ================================ 00:18:47.769 
Supported: No 00:18:47.769 00:18:47.769 Persistent Memory Region Support 00:18:47.769 ================================ 00:18:47.769 Supported: No 00:18:47.769 00:18:47.769 Admin Command Set Attributes 00:18:47.769 ============================ 00:18:47.769 Security Send/Receive: Not Supported 00:18:47.769 Format NVM: Not Supported 00:18:47.769 Firmware Activate/Download: Not Supported 00:18:47.769 Namespace Management: Not Supported 00:18:47.769 Device Self-Test: Not Supported 00:18:47.769 Directives: Not Supported 00:18:47.769 NVMe-MI: Not Supported 00:18:47.769 Virtualization Management: Not Supported 00:18:47.769 Doorbell Buffer Config: Not Supported 00:18:47.769 Get LBA Status Capability: Not Supported 00:18:47.769 Command & Feature Lockdown Capability: Not Supported 00:18:47.769 Abort Command Limit: 4 00:18:47.769 Async Event Request Limit: 4 00:18:47.769 Number of Firmware Slots: N/A 00:18:47.769 Firmware Slot 1 Read-Only: N/A 00:18:47.769 Firmware Activation Without Reset: N/A 00:18:47.769 Multiple Update Detection Support: N/A 00:18:47.769 Firmware Update Granularity: No Information Provided 00:18:47.769 Per-Namespace SMART Log: No 00:18:47.769 Asymmetric Namespace Access Log Page: Not Supported 00:18:47.769 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:47.769 Command Effects Log Page: Supported 00:18:47.769 Get Log Page Extended Data: Supported 00:18:47.769 Telemetry Log Pages: Not Supported 00:18:47.769 Persistent Event Log Pages: Not Supported 00:18:47.769 Supported Log Pages Log Page: May Support 00:18:47.769 Commands Supported & Effects Log Page: Not Supported 00:18:47.769 Feature Identifiers & Effects Log Page:May Support 00:18:47.769 NVMe-MI Commands & Effects Log Page: May Support 00:18:47.769 Data Area 4 for Telemetry Log: Not Supported 00:18:47.769 Error Log Page Entries Supported: 128 00:18:47.769 Keep Alive: Supported 00:18:47.769 Keep Alive Granularity: 10000 ms 00:18:47.769 00:18:47.769 NVM Command Set Attributes 00:18:47.769 ========================== 00:18:47.769 Submission Queue Entry Size 00:18:47.769 Max: 64 00:18:47.769 Min: 64 00:18:47.769 Completion Queue Entry Size 00:18:47.769 Max: 16 00:18:47.769 Min: 16 00:18:47.769 Number of Namespaces: 32 00:18:47.769 Compare Command: Supported 00:18:47.769 Write Uncorrectable Command: Not Supported 00:18:47.769 Dataset Management Command: Supported 00:18:47.769 Write Zeroes Command: Supported 00:18:47.769 Set Features Save Field: Not Supported 00:18:47.769 Reservations: Not Supported 00:18:47.769 Timestamp: Not Supported 00:18:47.769 Copy: Supported 00:18:47.769 Volatile Write Cache: Present 00:18:47.769 Atomic Write Unit (Normal): 1 00:18:47.769 Atomic Write Unit (PFail): 1 00:18:47.769 Atomic Compare & Write Unit: 1 00:18:47.769 Fused Compare & Write: Supported 00:18:47.769 Scatter-Gather List 00:18:47.769 SGL Command Set: Supported (Dword aligned) 00:18:47.769 SGL Keyed: Not Supported 00:18:47.769 SGL Bit Bucket Descriptor: Not Supported 00:18:47.769 SGL Metadata Pointer: Not Supported 00:18:47.769 Oversized SGL: Not Supported 00:18:47.769 SGL Metadata Address: Not Supported 00:18:47.769 SGL Offset: Not Supported 00:18:47.769 Transport SGL Data Block: Not Supported 00:18:47.769 Replay Protected Memory Block: Not Supported 00:18:47.769 00:18:47.769 Firmware Slot Information 00:18:47.769 ========================= 00:18:47.769 Active slot: 1 00:18:47.769 Slot 1 Firmware Revision: 25.01 00:18:47.769 00:18:47.769 00:18:47.769 Commands Supported and Effects 00:18:47.769 ============================== 00:18:47.769 Admin 
Commands 00:18:47.769 -------------- 00:18:47.769 Get Log Page (02h): Supported 00:18:47.769 Identify (06h): Supported 00:18:47.769 Abort (08h): Supported 00:18:47.769 Set Features (09h): Supported 00:18:47.769 Get Features (0Ah): Supported 00:18:47.769 Asynchronous Event Request (0Ch): Supported 00:18:47.769 Keep Alive (18h): Supported 00:18:47.769 I/O Commands 00:18:47.769 ------------ 00:18:47.769 Flush (00h): Supported LBA-Change 00:18:47.769 Write (01h): Supported LBA-Change 00:18:47.769 Read (02h): Supported 00:18:47.769 Compare (05h): Supported 00:18:47.769 Write Zeroes (08h): Supported LBA-Change 00:18:47.769 Dataset Management (09h): Supported LBA-Change 00:18:47.769 Copy (19h): Supported LBA-Change 00:18:47.769 00:18:47.769 Error Log 00:18:47.769 ========= 00:18:47.769 00:18:47.769 Arbitration 00:18:47.769 =========== 00:18:47.769 Arbitration Burst: 1 00:18:47.769 00:18:47.769 Power Management 00:18:47.769 ================ 00:18:47.769 Number of Power States: 1 00:18:47.769 Current Power State: Power State #0 00:18:47.769 Power State #0: 00:18:47.769 Max Power: 0.00 W 00:18:47.769 Non-Operational State: Operational 00:18:47.769 Entry Latency: Not Reported 00:18:47.769 Exit Latency: Not Reported 00:18:47.769 Relative Read Throughput: 0 00:18:47.769 Relative Read Latency: 0 00:18:47.769 Relative Write Throughput: 0 00:18:47.769 Relative Write Latency: 0 00:18:47.769 Idle Power: Not Reported 00:18:47.769 Active Power: Not Reported 00:18:47.769 Non-Operational Permissive Mode: Not Supported 00:18:47.769 00:18:47.769 Health Information 00:18:47.769 ================== 00:18:47.769 Critical Warnings: 00:18:47.769 Available Spare Space: OK 00:18:47.769 Temperature: OK 00:18:47.769 Device Reliability: OK 00:18:47.769 Read Only: No 00:18:47.769 Volatile Memory Backup: OK 00:18:47.769 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:47.769 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:47.769 Available Spare: 0% 00:18:47.769 Available Spare Threshold: 0% 00:18:47.770 Life Percentage Used: 0% 00:18:47.770 Data Units Read: 0 00:18:47.770 Data Units Written: 0 00:18:47.770 Host Read Commands: 0 00:18:47.770 Host Write Commands: 0 00:18:47.770 Controller Busy Time: 0 minutes 00:18:47.770 Power Cycles: 0 00:18:47.770 Power On Hours: 0 hours 00:18:47.770 Unsafe Shutdowns: 0 00:18:47.770 Unrecoverable Media Errors: 0 00:18:47.770 Lifetime Error Log Entries: 0 00:18:47.770 Warning Temperature Time: 0 minutes 00:18:47.770 Critical Temperature Time: 0 minutes 00:18:47.770 00:18:47.770 Number of Queues 00:18:47.770 ================ 00:18:47.770 Number of I/O Submission Queues: 127 00:18:47.770 Number of I/O Completion Queues: 127 00:18:47.770 00:18:47.770 Active Namespaces 00:18:47.770 ================= 00:18:47.770 Namespace ID:1 00:18:47.770 Error Recovery Timeout: Unlimited 00:18:47.770 Command Set Identifier: NVM (00h) 00:18:47.770 Deallocate: Supported 00:18:47.770 Deallocated/Unwritten Error: Not Supported 00:18:47.770 Deallocated Read Value: Unknown 00:18:47.770 Deallocate in Write Zeroes: Not Supported 00:18:47.770 Deallocated Guard Field: 0xFFFF 00:18:47.770 Flush: Supported 00:18:47.770 Reservation: Supported 00:18:47.770 Namespace Sharing Capabilities: Multiple Controllers 00:18:47.770 Size (in LBAs): 131072 (0GiB) 00:18:47.770 Capacity (in LBAs): 131072 (0GiB) 00:18:47.770 Utilization (in LBAs): 131072 (0GiB) 00:18:47.770 NGUID: 12C5AEDCA26F41CBAB33FF36EF650D57 00:18:47.770 UUID: 12c5aedc-a26f-41cb-ab33-ff36ef650d57 00:18:47.770 Thin Provisioning: Not Supported 00:18:47.770 Per-NS Atomic Units: Yes 00:18:47.770 Atomic Boundary Size (Normal): 0 00:18:47.770 Atomic Boundary Size (PFail): 0 00:18:47.770 Atomic Boundary Offset: 0 00:18:47.770 Maximum Single Source Range Length: 65535 00:18:47.770 Maximum Copy Length: 65535 00:18:47.770 Maximum Source Range Count: 1 00:18:47.770 NGUID/EUI64 Never Reused: No 00:18:47.770 Namespace Write Protected: No 00:18:47.770 Number of LBA Formats: 1 00:18:47.770 Current LBA Format: LBA Format #00 00:18:47.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:47.770
00:18:47.769 [2024-11-20 06:29:19.312013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:47.769 [2024-11-20 06:29:19.312021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:47.769 [2024-11-20 06:29:19.312043] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:47.769 [2024-11-20 06:29:19.312052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.769 [2024-11-20 06:29:19.312058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.769 [2024-11-20 06:29:19.312063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.769 [2024-11-20 06:29:19.312068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.769 [2024-11-20 06:29:19.312192] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:47.769 [2024-11-20 06:29:19.312206] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:47.769 [2024-11-20 06:29:19.313191] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:47.769 [2024-11-20 06:29:19.313245] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:47.769 [2024-11-20 06:29:19.313251] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:47.769 [2024-11-20 06:29:19.314197] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:47.769 [2024-11-20 06:29:19.314211] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:47.769 [2024-11-20 06:29:19.314262] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:47.770 [2024-11-20 06:29:19.317209] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:18:47.770 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
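Note: the spdk_nvme_perf invocation above drives cnode1 directly over the vfio-user transport: -q 128 keeps 128 I/Os outstanding, -o 4096 issues 4 KiB I/Os, -w read selects the workload, -t 5 runs for five seconds, and -c 0x2 pins the worker to core 1 (-s 256 and -g configure DPDK memory setup and stay identical across all the perf runs in this job). In the result table that follows, the MiB/s column is just IOPS times the I/O size, which makes an easy sanity check on the run:

    awk 'BEGIN { printf "%.2f MiB/s\n", 39946.71 * 4096 / 1048576 }'
    # prints: 156.04 MiB/s, matching the read run's MiB/s column below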
00:18:47.770 [2024-11-20 06:29:19.545189] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:53.082 Initializing NVMe Controllers 00:18:53.082 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:53.082 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:53.082 Initialization complete. Launching workers. 00:18:53.082 ======================================================== 00:18:53.082 Latency(us) 00:18:53.082 Device Information : IOPS MiB/s Average min max 00:18:53.082 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39946.71 156.04 3204.07 946.77 7139.67 00:18:53.082 ======================================================== 00:18:53.082 Total : 39946.71 156.04 3204.07 946.77 7139.67 00:18:53.082 00:18:53.082 [2024-11-20 06:29:24.569157] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:53.082 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:53.082 [2024-11-20 06:29:24.804204] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:58.351 Initializing NVMe Controllers 00:18:58.351 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:58.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:58.351 Initialization complete. Launching workers. 
00:18:58.351 ======================================================== 00:18:58.351 Latency(us) 00:18:58.351 Device Information : IOPS MiB/s Average min max 00:18:58.351 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16047.60 62.69 7983.16 5983.84 15447.21 00:18:58.351 ======================================================== 00:18:58.351 Total : 16047.60 62.69 7983.16 5983.84 15447.21 00:18:58.351 00:18:58.351 [2024-11-20 06:29:29.841494] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:58.351 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:58.351 [2024-11-20 06:29:30.052483] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:03.618 [2024-11-20 06:29:35.125521] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:03.618 Initializing NVMe Controllers 00:19:03.618 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:03.618 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:03.618 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:03.618 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:03.618 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:03.618 Initialization complete. Launching workers. 00:19:03.618 Starting thread on core 2 00:19:03.618 Starting thread on core 3 00:19:03.618 Starting thread on core 1 00:19:03.618 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:03.618 [2024-11-20 06:29:35.422598] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:06.903 [2024-11-20 06:29:38.481196] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:06.903 Initializing NVMe Controllers 00:19:06.903 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:06.903 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:06.903 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:06.903 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:06.903 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:06.903 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:06.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:06.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:06.903 Initialization complete. Launching workers. 
00:19:06.903 Starting thread on core 1 with urgent priority queue 00:19:06.903 Starting thread on core 2 with urgent priority queue 00:19:06.903 Starting thread on core 3 with urgent priority queue 00:19:06.903 Starting thread on core 0 with urgent priority queue 00:19:06.903 SPDK bdev Controller (SPDK1 ) core 0: 7611.33 IO/s 13.14 secs/100000 ios 00:19:06.903 SPDK bdev Controller (SPDK1 ) core 1: 7499.67 IO/s 13.33 secs/100000 ios 00:19:06.903 SPDK bdev Controller (SPDK1 ) core 2: 9150.00 IO/s 10.93 secs/100000 ios 00:19:06.903 SPDK bdev Controller (SPDK1 ) core 3: 8967.00 IO/s 11.15 secs/100000 ios 00:19:06.903 ======================================================== 00:19:06.903 00:19:06.903 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:07.161 [2024-11-20 06:29:38.773632] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:07.161 Initializing NVMe Controllers 00:19:07.161 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:07.161 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:07.161 Namespace ID: 1 size: 0GB 00:19:07.161 Initialization complete. 00:19:07.161 INFO: using host memory buffer for IO 00:19:07.161 Hello world! 00:19:07.161 [2024-11-20 06:29:38.807861] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:07.161 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:07.419 [2024-11-20 06:29:39.086613] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:08.354 Initializing NVMe Controllers 00:19:08.354 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:08.354 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:08.354 Initialization complete. Launching workers. 
00:19:08.354 submit (in ns) avg, min, max = 5844.7, 3207.6, 4000027.6 00:19:08.354 complete (in ns) avg, min, max = 20955.3, 1761.9, 4178099.0 00:19:08.354 00:19:08.354 Submit histogram 00:19:08.354 ================ 00:19:08.354 Range in us Cumulative Count 00:19:08.354 3.200 - 3.215: 0.0303% ( 5) 00:19:08.354 3.215 - 3.230: 0.1271% ( 16) 00:19:08.354 3.230 - 3.246: 0.3147% ( 31) 00:19:08.354 3.246 - 3.261: 0.8110% ( 82) 00:19:08.354 3.261 - 3.276: 3.0019% ( 362) 00:19:08.354 3.276 - 3.291: 8.3883% ( 890) 00:19:08.354 3.291 - 3.307: 14.4768% ( 1006) 00:19:08.354 3.307 - 3.322: 21.2371% ( 1117) 00:19:08.354 3.322 - 3.337: 28.0518% ( 1126) 00:19:08.354 3.337 - 3.352: 34.2371% ( 1022) 00:19:08.354 3.352 - 3.368: 40.0472% ( 960) 00:19:08.354 3.368 - 3.383: 45.7484% ( 942) 00:19:08.354 3.383 - 3.398: 51.5161% ( 953) 00:19:08.354 3.398 - 3.413: 56.2973% ( 790) 00:19:08.354 3.413 - 3.429: 63.9896% ( 1271) 00:19:08.354 3.429 - 3.444: 71.2522% ( 1200) 00:19:08.354 3.444 - 3.459: 75.5855% ( 716) 00:19:08.354 3.459 - 3.474: 79.9492% ( 721) 00:19:08.354 3.474 - 3.490: 82.7211% ( 458) 00:19:08.354 3.490 - 3.505: 84.8877% ( 358) 00:19:08.354 3.505 - 3.520: 85.8924% ( 166) 00:19:08.354 3.520 - 3.535: 86.4129% ( 86) 00:19:08.354 3.535 - 3.550: 86.7215% ( 51) 00:19:08.354 3.550 - 3.566: 87.0120% ( 48) 00:19:08.354 3.566 - 3.581: 87.6596% ( 107) 00:19:08.354 3.581 - 3.596: 88.4040% ( 123) 00:19:08.354 3.596 - 3.611: 89.5297% ( 186) 00:19:08.354 3.611 - 3.627: 90.5041% ( 161) 00:19:08.354 3.627 - 3.642: 91.4664% ( 159) 00:19:08.354 3.642 - 3.657: 92.5377% ( 177) 00:19:08.354 3.657 - 3.672: 93.5423% ( 166) 00:19:08.354 3.672 - 3.688: 94.7104% ( 193) 00:19:08.354 3.688 - 3.703: 95.7151% ( 166) 00:19:08.354 3.703 - 3.718: 96.5140% ( 132) 00:19:08.354 3.718 - 3.733: 97.1736% ( 109) 00:19:08.354 3.733 - 3.749: 97.6336% ( 76) 00:19:08.354 3.749 - 3.764: 97.9241% ( 48) 00:19:08.354 3.764 - 3.779: 98.2872% ( 60) 00:19:08.354 3.779 - 3.794: 98.5172% ( 38) 00:19:08.354 3.794 - 3.810: 98.6988% ( 30) 00:19:08.354 3.810 - 3.825: 98.8259% ( 21) 00:19:08.354 3.825 - 3.840: 98.9167% ( 15) 00:19:08.354 3.840 - 3.855: 98.9893% ( 12) 00:19:08.354 3.855 - 3.870: 99.0861% ( 16) 00:19:08.354 3.870 - 3.886: 99.1527% ( 11) 00:19:08.354 3.886 - 3.901: 99.1830% ( 5) 00:19:08.354 3.901 - 3.931: 99.2435% ( 10) 00:19:08.354 3.931 - 3.962: 99.3101% ( 11) 00:19:08.354 3.962 - 3.992: 99.3948% ( 14) 00:19:08.354 3.992 - 4.023: 99.4553% ( 10) 00:19:08.354 4.023 - 4.053: 99.4856% ( 5) 00:19:08.355 4.053 - 4.084: 99.5098% ( 4) 00:19:08.355 4.084 - 4.114: 99.5219% ( 2) 00:19:08.355 4.114 - 4.145: 99.5461% ( 4) 00:19:08.355 4.145 - 4.175: 99.5703% ( 4) 00:19:08.355 4.175 - 4.206: 99.6006% ( 5) 00:19:08.355 4.206 - 4.236: 99.6066% ( 1) 00:19:08.355 4.236 - 4.267: 99.6187% ( 2) 00:19:08.355 4.267 - 4.297: 99.6369% ( 3) 00:19:08.355 4.297 - 4.328: 99.6429% ( 1) 00:19:08.355 4.328 - 4.358: 99.6490% ( 1) 00:19:08.355 4.480 - 4.510: 99.6550% ( 1) 00:19:08.355 5.029 - 5.059: 99.6611% ( 1) 00:19:08.355 5.150 - 5.181: 99.6671% ( 1) 00:19:08.355 5.303 - 5.333: 99.6792% ( 2) 00:19:08.355 5.394 - 5.425: 99.6913% ( 2) 00:19:08.355 5.516 - 5.547: 99.6974% ( 1) 00:19:08.355 5.547 - 5.577: 99.7034% ( 1) 00:19:08.355 5.577 - 5.608: 99.7095% ( 1) 00:19:08.355 5.608 - 5.638: 99.7155% ( 1) 00:19:08.355 5.699 - 5.730: 99.7216% ( 1) 00:19:08.355 5.730 - 5.760: 99.7337% ( 2) 00:19:08.355 5.973 - 6.004: 99.7398% ( 1) 00:19:08.355 6.034 - 6.065: 99.7458% ( 1) 00:19:08.355 6.126 - 6.156: 99.7519% ( 1) 00:19:08.355 6.156 - 6.187: 99.7579% ( 1) 00:19:08.355 6.248 - 6.278: 
99.7700% ( 2) 00:19:08.355 6.309 - 6.339: 99.7761% ( 1) 00:19:08.355 6.370 - 6.400: 99.7821% ( 1) 00:19:08.355 6.491 - 6.522: 99.7882% ( 1) 00:19:08.355 6.583 - 6.613: 99.8003% ( 2) 00:19:08.355 6.613 - 6.644: 99.8063% ( 1) 00:19:08.355 6.796 - 6.827: 99.8245% ( 3) 00:19:08.355 6.888 - 6.918: 99.8366% ( 2) 00:19:08.355 6.918 - 6.949: 99.8426% ( 1) 00:19:08.355 7.010 - 7.040: 99.8487% ( 1) 00:19:08.355 7.040 - 7.070: 99.8608% ( 2) 00:19:08.355 7.131 - 7.162: 99.8669% ( 1) 00:19:08.355 7.345 - 7.375: 99.8729% ( 1) 00:19:08.355 7.558 - 7.589: 99.8790% ( 1) 00:19:08.355 7.710 - 7.741: 99.8850% ( 1) 00:19:08.355 7.741 - 7.771: 99.8911% ( 1) 00:19:08.355 7.802 - 7.863: 99.8971% ( 1) 00:19:08.355 7.863 - 7.924: 99.9032% ( 1) 00:19:08.355 8.107 - 8.168: 99.9092% ( 1) 00:19:08.355 8.350 - 8.411: 99.9153% ( 1) 00:19:08.355 8.777 - 8.838: 99.9274% ( 2) 00:19:08.355 9.204 - 9.265: 99.9334% ( 1) 00:19:08.355 15.116 - 15.177: 99.9395% ( 1) 00:19:08.355 3994.575 - 4025.783: 100.0000% ( 10) 00:19:08.355 00:19:08.355 Complete histogram 00:19:08.355 ================== 00:19:08.355 Range in us Cumulative Count 00:19:08.355 1.760 - 1.768: 0.0363% ( 6) 00:19:08.355 1.768 - 1.775: 0.2602% ( 37) 00:19:08.355 1.775 - 1.783: 0.8291% ( 94) 00:19:08.355 1.783 - 1.790: 1.6280% ( 132) 00:19:08.355 1.790 - 1.798: 2.4027% ( 128) 00:19:08.355 1.798 - 1.806: 2.9656% ( 93) 00:19:08.355 1.806 - 1.813: 3.2500% ( 47) 00:19:08.355 1.813 - 1.821: 5.8585% ( 431) 00:19:08.355 1.821 - 1.829: 23.1132% ( 2851) 00:19:08.355 1.829 - 1.836: 56.5575% ( 5526) 00:19:08.355 1.836 - 1.844: 79.7131% ( 3826) 00:19:08.355 1.844 - 1.851: 88.1075% ( 1387) 00:19:08.355 1.851 - 1.859: 91.7085% ( 595) 00:19:08.355 1.859 - 1.867: 93.8934% ( 361) 00:19:08.355 1.867 - 1.874: 94.8980% ( 166) 00:19:08.355 1.874 - 1.882: 95.2309% ( 55) 00:19:08.355 1.882 - 1.890: 95.4367% ( 34) 00:19:08.355 1.890 - 1.897: 95.8543% ( 69) 00:19:08.355 1.897 - 1.905: 96.3324% ( 79) 00:19:08.355 1.905 - 1.912: 96.7621% ( 71) 00:19:08.355 1.912 - 1.920: 97.1797% ( 69) 00:19:08.355 1.920 - 1.928: 97.2947% ( 19) 00:19:08.355 1.928 - 1.935: 97.3976% ( 17) 00:19:08.355 1.935 - 1.943: 97.5126% ( 19) 00:19:08.355 1.943 - 1.950: 97.7062% ( 32) 00:19:08.355 1.950 - 1.966: 97.9907% ( 47) 00:19:08.355 1.966 - 1.981: 98.0451% ( 9) 00:19:08.355 1.981 - 1.996: 98.0633% ( 3) 00:19:08.355 1.996 - 2.011: 98.0694% ( 1) 00:19:08.355 2.011 - 2.027: 98.0875% ( 3) 00:19:08.355 2.027 - 2.042: 98.1117% ( 4) 00:19:08.355 2.042 - 2.057: 98.1359% ( 4) 00:19:08.355 2.057 - 2.072: 98.3538% ( 36) 00:19:08.355 2.072 - 2.088: 98.6141% ( 43) 00:19:08.355 2.088 - 2.103: 98.6504% ( 6) 00:19:08.355 2.103 - 2.118: 98.6867% ( 6) 00:19:08.355 2.118 - 2.133: 98.6988% ( 2) 00:19:08.355 2.149 - 2.164: 98.7230% ( 4) 00:19:08.355 2.164 - 2.179: 98.7351% ( 2) 00:19:08.355 2.179 - 2.194: 98.7472% ( 2) 00:19:08.355 2.194 - 2.210: 98.7896% ( 7) 00:19:08.355 2.210 - 2.225: 99.0922% ( 50) 00:19:08.355 2.225 - 2.240: 99.2677% ( 29) 00:19:08.355 2.240 - 2.255: 99.2858% ( 3) 00:19:08.355 2.255 - 2.270: 99.2979% ( 2) 00:19:08.355 2.270 - 2.286: 99.3040% ( 1) 00:19:08.355 2.286 - 2.301: 99.3222% ( 3) 00:19:08.355 2.514 - 2.530: 99.3282% ( 1) 00:19:08.355 2.545 - 2.560: 99.3343% ( 1) 00:19:08.355 3.550 - 3.566: 99.3403% ( 1) 00:19:08.355 3.764 - 3.779: 99.3464% ( 1) 00:19:08.355 3.794 - 3.810: 99.3585% ( 2) 00:19:08.355 3.870 - 3.886: 99.3645% ( 1) 00:19:08.355 3.901 - 3.931: 99.3706% ( 1) 00:19:08.355 3.931 - 3.962: 99.3766% ( 1) 00:19:08.355 4.084 - 4.114: 99.3827% ( 1) 00:19:08.355 4.114 - 4.145: 99.3887% ( 1) 00:19:08.355 
4.510 - 4.541: 99.3948% ( 1) 00:19:08.355 4.876 - 4.907: 99.4008% ( 1) 00:19:08.355 5.211 - 5.242: 99.4069% ( 1) 00:19:08.355 5.242 - 5.272: 99.4129% ( 1) 00:19:08.355 5.303 - 5.333: 99.4190% ( 1) 00:19:08.355 5.516 - 5.547: 99.4311% ( 2) 00:19:08.355 5.638 - 5.669: 99.4371% ( 1) 00:19:08.355 [2024-11-20 06:29:40.108544] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:08.355 6.034 - 6.065: 99.4432% ( 1) 00:19:08.355 6.095 - 6.126: 99.4493% ( 1) 00:19:08.355 6.522 - 6.552: 99.4553% ( 1) 00:19:08.355 6.613 - 6.644: 99.4614% ( 1) 00:19:08.355 6.674 - 6.705: 99.4674% ( 1) 00:19:08.355 6.766 - 6.796: 99.4735% ( 1) 00:19:08.355 7.192 - 7.223: 99.4795% ( 1) 00:19:08.355 7.802 - 7.863: 99.4856% ( 1) 00:19:08.355 8.107 - 8.168: 99.4916% ( 1) 00:19:08.355 11.215 - 11.276: 99.4977% ( 1) 00:19:08.355 12.312 - 12.373: 99.5037% ( 1) 00:19:08.355 13.897 - 13.958: 99.5098% ( 1) 00:19:08.355 17.798 - 17.920: 99.5158% ( 1) 00:19:08.355 998.644 - 1006.446: 99.5219% ( 1) 00:19:08.355 2886.705 - 2902.309: 99.5279% ( 1) 00:19:08.355 3978.971 - 3994.575: 99.5340% ( 1) 00:19:08.355 3994.575 - 4025.783: 99.9939% ( 76) 00:19:08.355 4150.613 - 4181.821: 100.0000% ( 1) 00:19:08.355 00:19:08.355 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:08.355 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:08.355 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:08.355 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:08.355 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems [ 00:19:08.614 { 00:19:08.614 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:08.614 "subtype": "Discovery", 00:19:08.614 "listen_addresses": [], 00:19:08.614 "allow_any_host": true, 00:19:08.614 "hosts": [] 00:19:08.614 }, 00:19:08.614 { 00:19:08.614 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:08.614 "subtype": "NVMe", 00:19:08.614 "listen_addresses": [ 00:19:08.614 { 00:19:08.614 "trtype": "VFIOUSER", 00:19:08.614 "adrfam": "IPv4", 00:19:08.614 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:08.614 "trsvcid": "0" 00:19:08.614 } 00:19:08.614 ], 00:19:08.614 "allow_any_host": true, 00:19:08.614 "hosts": [], 00:19:08.614 "serial_number": "SPDK1", 00:19:08.614 "model_number": "SPDK bdev Controller", 00:19:08.614 "max_namespaces": 32, 00:19:08.614 "min_cntlid": 1, 00:19:08.614 "max_cntlid": 65519, 00:19:08.614 "namespaces": [ 00:19:08.614 { 00:19:08.615 "nsid": 1, 00:19:08.615 "bdev_name": "Malloc1", 00:19:08.615 "name": "Malloc1", 00:19:08.615 "nguid": "12C5AEDCA26F41CBAB33FF36EF650D57", 00:19:08.615 "uuid": "12c5aedc-a26f-41cb-ab33-ff36ef650d57" 00:19:08.615 } 00:19:08.615 ] 00:19:08.615 }, 00:19:08.615 { 00:19:08.615 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:08.615 "subtype": "NVMe", 00:19:08.615 "listen_addresses": [ 00:19:08.615 { 00:19:08.615 "trtype": "VFIOUSER", 00:19:08.615 "adrfam": "IPv4", 00:19:08.615 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:08.615 "trsvcid": "0" 00:19:08.615 } 00:19:08.615 ], 00:19:08.615 "allow_any_host": true, 00:19:08.615 "hosts": [], 
00:19:08.615 "serial_number": "SPDK2", 00:19:08.615 "model_number": "SPDK bdev Controller", 00:19:08.615 "max_namespaces": 32, 00:19:08.615 "min_cntlid": 1, 00:19:08.615 "max_cntlid": 65519, 00:19:08.615 "namespaces": [ 00:19:08.615 { 00:19:08.615 "nsid": 1, 00:19:08.615 "bdev_name": "Malloc2", 00:19:08.615 "name": "Malloc2", 00:19:08.615 "nguid": "0254B6485E56410CA29140AEF7888BB8", 00:19:08.615 "uuid": "0254b648-5e56-410c-a291-40aef7888bb8" 00:19:08.615 } 00:19:08.615 ] 00:19:08.615 } 00:19:08.615 ] 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=515488 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:08.615 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:08.874 [2024-11-20 06:29:40.514617] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:08.874 Malloc3 00:19:08.874 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:09.133 [2024-11-20 06:29:40.759524] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:09.133 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:09.133 Asynchronous Event Request test 00:19:09.133 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:09.133 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:09.133 Registering asynchronous event callbacks... 00:19:09.133 Starting namespace attribute notice tests for all controllers... 00:19:09.133 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:09.133 aer_cb - Changed Namespace 00:19:09.133 Cleaning up... 
00:19:09.133 [ 00:19:09.133 { 00:19:09.133 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:09.133 "subtype": "Discovery", 00:19:09.133 "listen_addresses": [], 00:19:09.133 "allow_any_host": true, 00:19:09.133 "hosts": [] 00:19:09.133 }, 00:19:09.133 { 00:19:09.133 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:09.133 "subtype": "NVMe", 00:19:09.133 "listen_addresses": [ 00:19:09.133 { 00:19:09.133 "trtype": "VFIOUSER", 00:19:09.133 "adrfam": "IPv4", 00:19:09.133 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:09.133 "trsvcid": "0" 00:19:09.133 } 00:19:09.133 ], 00:19:09.133 "allow_any_host": true, 00:19:09.133 "hosts": [], 00:19:09.133 "serial_number": "SPDK1", 00:19:09.133 "model_number": "SPDK bdev Controller", 00:19:09.133 "max_namespaces": 32, 00:19:09.133 "min_cntlid": 1, 00:19:09.133 "max_cntlid": 65519, 00:19:09.133 "namespaces": [ 00:19:09.133 { 00:19:09.133 "nsid": 1, 00:19:09.133 "bdev_name": "Malloc1", 00:19:09.133 "name": "Malloc1", 00:19:09.133 "nguid": "12C5AEDCA26F41CBAB33FF36EF650D57", 00:19:09.133 "uuid": "12c5aedc-a26f-41cb-ab33-ff36ef650d57" 00:19:09.133 }, 00:19:09.133 { 00:19:09.133 "nsid": 2, 00:19:09.133 "bdev_name": "Malloc3", 00:19:09.133 "name": "Malloc3", 00:19:09.133 "nguid": "CEB4BE5067C74E059690BEC7791076DD", 00:19:09.133 "uuid": "ceb4be50-67c7-4e05-9690-bec7791076dd" 00:19:09.133 } 00:19:09.133 ] 00:19:09.133 }, 00:19:09.133 { 00:19:09.133 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:09.133 "subtype": "NVMe", 00:19:09.133 "listen_addresses": [ 00:19:09.133 { 00:19:09.133 "trtype": "VFIOUSER", 00:19:09.133 "adrfam": "IPv4", 00:19:09.133 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:09.133 "trsvcid": "0" 00:19:09.133 } 00:19:09.133 ], 00:19:09.133 "allow_any_host": true, 00:19:09.133 "hosts": [], 00:19:09.133 "serial_number": "SPDK2", 00:19:09.133 "model_number": "SPDK bdev Controller", 00:19:09.133 "max_namespaces": 32, 00:19:09.133 "min_cntlid": 1, 00:19:09.133 "max_cntlid": 65519, 00:19:09.133 "namespaces": [ 00:19:09.133 { 00:19:09.133 "nsid": 1, 00:19:09.133 "bdev_name": "Malloc2", 00:19:09.133 "name": "Malloc2", 00:19:09.133 "nguid": "0254B6485E56410CA29140AEF7888BB8", 00:19:09.133 "uuid": "0254b648-5e56-410c-a291-40aef7888bb8" 00:19:09.133 } 00:19:09.133 ] 00:19:09.133 } 00:19:09.133 ] 00:19:09.393 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 515488 00:19:09.393 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:09.393 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:09.393 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:09.393 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:09.393 [2024-11-20 06:29:41.005849] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:09.393 [2024-11-20 06:29:41.005882] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515709 ] 00:19:09.394 [2024-11-20 06:29:41.047831] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:09.394 [2024-11-20 06:29:41.056413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:09.394 [2024-11-20 06:29:41.056440] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcb3b6f5000 00:19:09.394 [2024-11-20 06:29:41.057416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:09.394 [2024-11-20 06:29:41.058426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:09.394 [2024-11-20 06:29:41.059426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:09.394 [2024-11-20 06:29:41.060431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:09.394 [2024-11-20 06:29:41.061443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:09.394 [2024-11-20 06:29:41.062445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:09.394 [2024-11-20 06:29:41.063450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:09.394 [2024-11-20 06:29:41.064462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:09.394 [2024-11-20 06:29:41.065469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:09.394 [2024-11-20 06:29:41.065479] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcb3b6ea000 00:19:09.394 [2024-11-20 06:29:41.066394] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:09.394 [2024-11-20 06:29:41.079748] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:09.394 [2024-11-20 06:29:41.079773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:09.394 [2024-11-20 06:29:41.081843] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:09.394 [2024-11-20 06:29:41.081883] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:09.394 [2024-11-20 06:29:41.081952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:09.394 
[2024-11-20 06:29:41.081965] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:09.394 [2024-11-20 06:29:41.081970] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:09.394 [2024-11-20 06:29:41.082849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:09.394 [2024-11-20 06:29:41.082858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:09.394 [2024-11-20 06:29:41.082864] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:09.394 [2024-11-20 06:29:41.083851] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:09.394 [2024-11-20 06:29:41.083861] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:09.394 [2024-11-20 06:29:41.083867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:09.394 [2024-11-20 06:29:41.084858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:09.394 [2024-11-20 06:29:41.084869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:09.394 [2024-11-20 06:29:41.085862] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:09.394 [2024-11-20 06:29:41.085871] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:09.394 [2024-11-20 06:29:41.085875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:09.394 [2024-11-20 06:29:41.085881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:09.394 [2024-11-20 06:29:41.085988] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:09.394 [2024-11-20 06:29:41.085993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:09.394 [2024-11-20 06:29:41.085997] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:09.394 [2024-11-20 06:29:41.086866] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:09.394 [2024-11-20 06:29:41.087868] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:09.394 [2024-11-20 06:29:41.088874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:09.394 [2024-11-20 06:29:41.089881] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:09.394 [2024-11-20 06:29:41.089918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:09.394 [2024-11-20 06:29:41.090898] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:09.394 [2024-11-20 06:29:41.090908] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:09.394 [2024-11-20 06:29:41.090912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:09.394 [2024-11-20 06:29:41.090929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:09.394 [2024-11-20 06:29:41.090936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:09.394 [2024-11-20 06:29:41.090947] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:09.394 [2024-11-20 06:29:41.090951] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:09.394 [2024-11-20 06:29:41.090955] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:09.394 [2024-11-20 06:29:41.090966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:09.394 [2024-11-20 06:29:41.097209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:09.394 [2024-11-20 06:29:41.097221] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:09.394 [2024-11-20 06:29:41.097228] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:09.394 [2024-11-20 06:29:41.097233] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:09.394 [2024-11-20 06:29:41.097237] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:09.394 [2024-11-20 06:29:41.097244] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:09.394 [2024-11-20 06:29:41.097248] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:09.394 [2024-11-20 06:29:41.097252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:09.394 [2024-11-20 06:29:41.097261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:09.394 [2024-11-20 
06:29:41.097271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:09.394 [2024-11-20 06:29:41.105207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:09.394 [2024-11-20 06:29:41.105219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.394 [2024-11-20 06:29:41.105226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.394 [2024-11-20 06:29:41.105233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.394 [2024-11-20 06:29:41.105240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.394 [2024-11-20 06:29:41.105245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:09.394 [2024-11-20 06:29:41.105251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:09.394 [2024-11-20 06:29:41.105259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:09.394 [2024-11-20 06:29:41.113206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:09.394 [2024-11-20 06:29:41.113216] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:09.394 [2024-11-20 06:29:41.113221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:09.394 [2024-11-20 06:29:41.113227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:09.394 [2024-11-20 06:29:41.113232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:09.394 [2024-11-20 06:29:41.113240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:09.394 [2024-11-20 06:29:41.121209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:09.394 [2024-11-20 06:29:41.121263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.121271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.121281] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:09.395 [2024-11-20 06:29:41.121286] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:19:09.395 [2024-11-20 06:29:41.121289] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:09.395 [2024-11-20 06:29:41.121294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.129206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.129217] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:09.395 [2024-11-20 06:29:41.129229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.129236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.129242] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:09.395 [2024-11-20 06:29:41.129246] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:09.395 [2024-11-20 06:29:41.129249] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:09.395 [2024-11-20 06:29:41.129255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.137207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.137220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.137227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.137234] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:09.395 [2024-11-20 06:29:41.137238] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:09.395 [2024-11-20 06:29:41.137241] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:09.395 [2024-11-20 06:29:41.137246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.145206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.145215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.145222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.145229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.145235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.145239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.145244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.145250] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:09.395 [2024-11-20 06:29:41.145254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:09.395 [2024-11-20 06:29:41.145259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:09.395 [2024-11-20 06:29:41.145274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.153205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.153218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.161207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.161218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.169206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.169217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.177208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.177223] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:09.395 [2024-11-20 06:29:41.177227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:09.395 [2024-11-20 06:29:41.177230] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:09.395 [2024-11-20 06:29:41.177233] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:09.395 [2024-11-20 06:29:41.177237] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:09.395 [2024-11-20 06:29:41.177243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:09.395 [2024-11-20 06:29:41.177249] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:09.395 
[2024-11-20 06:29:41.177253] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:09.395 [2024-11-20 06:29:41.177256] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:09.395 [2024-11-20 06:29:41.177261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.177267] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:09.395 [2024-11-20 06:29:41.177271] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:09.395 [2024-11-20 06:29:41.177274] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:09.395 [2024-11-20 06:29:41.177279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.177286] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:09.395 [2024-11-20 06:29:41.177290] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:09.395 [2024-11-20 06:29:41.177293] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:09.395 [2024-11-20 06:29:41.177300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:09.395 [2024-11-20 06:29:41.185205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.185218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.185227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:09.395 [2024-11-20 06:29:41.185234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:09.395 ===================================================== 00:19:09.395 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:09.395 ===================================================== 00:19:09.395 Controller Capabilities/Features 00:19:09.395 ================================ 00:19:09.395 Vendor ID: 4e58 00:19:09.395 Subsystem Vendor ID: 4e58 00:19:09.395 Serial Number: SPDK2 00:19:09.395 Model Number: SPDK bdev Controller 00:19:09.395 Firmware Version: 25.01 00:19:09.395 Recommended Arb Burst: 6 00:19:09.395 IEEE OUI Identifier: 8d 6b 50 00:19:09.395 Multi-path I/O 00:19:09.395 May have multiple subsystem ports: Yes 00:19:09.395 May have multiple controllers: Yes 00:19:09.395 Associated with SR-IOV VF: No 00:19:09.395 Max Data Transfer Size: 131072 00:19:09.395 Max Number of Namespaces: 32 00:19:09.395 Max Number of I/O Queues: 127 00:19:09.395 NVMe Specification Version (VS): 1.3 00:19:09.395 NVMe Specification Version (Identify): 1.3 00:19:09.395 Maximum Queue Entries: 256 00:19:09.395 Contiguous Queues Required: Yes 00:19:09.395 Arbitration Mechanisms Supported 00:19:09.395 Weighted Round Robin: Not Supported 00:19:09.395 Vendor Specific: Not 
Supported 00:19:09.395 Reset Timeout: 15000 ms 00:19:09.395 Doorbell Stride: 4 bytes 00:19:09.395 NVM Subsystem Reset: Not Supported 00:19:09.395 Command Sets Supported 00:19:09.395 NVM Command Set: Supported 00:19:09.395 Boot Partition: Not Supported 00:19:09.395 Memory Page Size Minimum: 4096 bytes 00:19:09.395 Memory Page Size Maximum: 4096 bytes 00:19:09.395 Persistent Memory Region: Not Supported 00:19:09.395 Optional Asynchronous Events Supported 00:19:09.395 Namespace Attribute Notices: Supported 00:19:09.395 Firmware Activation Notices: Not Supported 00:19:09.395 ANA Change Notices: Not Supported 00:19:09.395 PLE Aggregate Log Change Notices: Not Supported 00:19:09.395 LBA Status Info Alert Notices: Not Supported 00:19:09.395 EGE Aggregate Log Change Notices: Not Supported 00:19:09.395 Normal NVM Subsystem Shutdown event: Not Supported 00:19:09.395 Zone Descriptor Change Notices: Not Supported 00:19:09.395 Discovery Log Change Notices: Not Supported 00:19:09.395 Controller Attributes 00:19:09.395 128-bit Host Identifier: Supported 00:19:09.396 Non-Operational Permissive Mode: Not Supported 00:19:09.396 NVM Sets: Not Supported 00:19:09.396 Read Recovery Levels: Not Supported 00:19:09.396 Endurance Groups: Not Supported 00:19:09.396 Predictable Latency Mode: Not Supported 00:19:09.396 Traffic Based Keep ALive: Not Supported 00:19:09.396 Namespace Granularity: Not Supported 00:19:09.396 SQ Associations: Not Supported 00:19:09.396 UUID List: Not Supported 00:19:09.396 Multi-Domain Subsystem: Not Supported 00:19:09.396 Fixed Capacity Management: Not Supported 00:19:09.396 Variable Capacity Management: Not Supported 00:19:09.396 Delete Endurance Group: Not Supported 00:19:09.396 Delete NVM Set: Not Supported 00:19:09.396 Extended LBA Formats Supported: Not Supported 00:19:09.396 Flexible Data Placement Supported: Not Supported 00:19:09.396 00:19:09.396 Controller Memory Buffer Support 00:19:09.396 ================================ 00:19:09.396 Supported: No 00:19:09.396 00:19:09.396 Persistent Memory Region Support 00:19:09.396 ================================ 00:19:09.396 Supported: No 00:19:09.396 00:19:09.396 Admin Command Set Attributes 00:19:09.396 ============================ 00:19:09.396 Security Send/Receive: Not Supported 00:19:09.396 Format NVM: Not Supported 00:19:09.396 Firmware Activate/Download: Not Supported 00:19:09.396 Namespace Management: Not Supported 00:19:09.396 Device Self-Test: Not Supported 00:19:09.396 Directives: Not Supported 00:19:09.396 NVMe-MI: Not Supported 00:19:09.396 Virtualization Management: Not Supported 00:19:09.396 Doorbell Buffer Config: Not Supported 00:19:09.396 Get LBA Status Capability: Not Supported 00:19:09.396 Command & Feature Lockdown Capability: Not Supported 00:19:09.396 Abort Command Limit: 4 00:19:09.396 Async Event Request Limit: 4 00:19:09.396 Number of Firmware Slots: N/A 00:19:09.396 Firmware Slot 1 Read-Only: N/A 00:19:09.396 Firmware Activation Without Reset: N/A 00:19:09.396 Multiple Update Detection Support: N/A 00:19:09.396 Firmware Update Granularity: No Information Provided 00:19:09.396 Per-Namespace SMART Log: No 00:19:09.396 Asymmetric Namespace Access Log Page: Not Supported 00:19:09.396 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:09.396 Command Effects Log Page: Supported 00:19:09.396 Get Log Page Extended Data: Supported 00:19:09.396 Telemetry Log Pages: Not Supported 00:19:09.396 Persistent Event Log Pages: Not Supported 00:19:09.396 Supported Log Pages Log Page: May Support 00:19:09.396 Commands Supported & 
Effects Log Page: Not Supported 00:19:09.396 Feature Identifiers & Effects Log Page:May Support 00:19:09.396 NVMe-MI Commands & Effects Log Page: May Support 00:19:09.396 Data Area 4 for Telemetry Log: Not Supported 00:19:09.396 Error Log Page Entries Supported: 128 00:19:09.396 Keep Alive: Supported 00:19:09.396 Keep Alive Granularity: 10000 ms 00:19:09.396 00:19:09.396 NVM Command Set Attributes 00:19:09.396 ========================== 00:19:09.396 Submission Queue Entry Size 00:19:09.396 Max: 64 00:19:09.396 Min: 64 00:19:09.396 Completion Queue Entry Size 00:19:09.396 Max: 16 00:19:09.396 Min: 16 00:19:09.396 Number of Namespaces: 32 00:19:09.396 Compare Command: Supported 00:19:09.396 Write Uncorrectable Command: Not Supported 00:19:09.396 Dataset Management Command: Supported 00:19:09.396 Write Zeroes Command: Supported 00:19:09.396 Set Features Save Field: Not Supported 00:19:09.396 Reservations: Not Supported 00:19:09.396 Timestamp: Not Supported 00:19:09.396 Copy: Supported 00:19:09.396 Volatile Write Cache: Present 00:19:09.396 Atomic Write Unit (Normal): 1 00:19:09.396 Atomic Write Unit (PFail): 1 00:19:09.396 Atomic Compare & Write Unit: 1 00:19:09.396 Fused Compare & Write: Supported 00:19:09.396 Scatter-Gather List 00:19:09.396 SGL Command Set: Supported (Dword aligned) 00:19:09.396 SGL Keyed: Not Supported 00:19:09.396 SGL Bit Bucket Descriptor: Not Supported 00:19:09.396 SGL Metadata Pointer: Not Supported 00:19:09.396 Oversized SGL: Not Supported 00:19:09.396 SGL Metadata Address: Not Supported 00:19:09.396 SGL Offset: Not Supported 00:19:09.396 Transport SGL Data Block: Not Supported 00:19:09.396 Replay Protected Memory Block: Not Supported 00:19:09.396 00:19:09.396 Firmware Slot Information 00:19:09.396 ========================= 00:19:09.396 Active slot: 1 00:19:09.396 Slot 1 Firmware Revision: 25.01 00:19:09.396 00:19:09.396 00:19:09.396 Commands Supported and Effects 00:19:09.396 ============================== 00:19:09.396 Admin Commands 00:19:09.396 -------------- 00:19:09.396 Get Log Page (02h): Supported 00:19:09.396 Identify (06h): Supported 00:19:09.396 Abort (08h): Supported 00:19:09.396 Set Features (09h): Supported 00:19:09.396 Get Features (0Ah): Supported 00:19:09.396 Asynchronous Event Request (0Ch): Supported 00:19:09.396 Keep Alive (18h): Supported 00:19:09.396 I/O Commands 00:19:09.396 ------------ 00:19:09.396 Flush (00h): Supported LBA-Change 00:19:09.396 Write (01h): Supported LBA-Change 00:19:09.396 Read (02h): Supported 00:19:09.396 Compare (05h): Supported 00:19:09.396 Write Zeroes (08h): Supported LBA-Change 00:19:09.396 Dataset Management (09h): Supported LBA-Change 00:19:09.396 Copy (19h): Supported LBA-Change 00:19:09.396 00:19:09.396 Error Log 00:19:09.396 ========= 00:19:09.396 00:19:09.396 Arbitration 00:19:09.396 =========== 00:19:09.396 Arbitration Burst: 1 00:19:09.396 00:19:09.396 Power Management 00:19:09.396 ================ 00:19:09.396 Number of Power States: 1 00:19:09.396 Current Power State: Power State #0 00:19:09.396 Power State #0: 00:19:09.396 Max Power: 0.00 W 00:19:09.396 Non-Operational State: Operational 00:19:09.396 Entry Latency: Not Reported 00:19:09.396 Exit Latency: Not Reported 00:19:09.396 Relative Read Throughput: 0 00:19:09.396 Relative Read Latency: 0 00:19:09.396 Relative Write Throughput: 0 00:19:09.396 Relative Write Latency: 0 00:19:09.396 Idle Power: Not Reported 00:19:09.396 Active Power: Not Reported 00:19:09.396 Non-Operational Permissive Mode: Not Supported 00:19:09.396 00:19:09.396 Health Information 
00:19:09.396 ================== 00:19:09.396 Critical Warnings: 00:19:09.396 Available Spare Space: OK 00:19:09.396 Temperature: OK 00:19:09.396 Device Reliability: OK 00:19:09.396 Read Only: No 00:19:09.396 Volatile Memory Backup: OK 00:19:09.396 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:09.396 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:09.396 Available Spare: 0% [2024-11-20 06:29:41.185321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 [2024-11-20 06:29:41.193208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 [2024-11-20 06:29:41.193236] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD [2024-11-20 06:29:41.193245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-20 06:29:41.193251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-20 06:29:41.193256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-20 06:29:41.193262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-20 06:29:41.193311] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 [2024-11-20 06:29:41.193321] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 [2024-11-20 06:29:41.194313] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller [2024-11-20 06:29:41.194355] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us [2024-11-20 06:29:41.194361] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms [2024-11-20 06:29:41.195325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 [2024-11-20 06:29:41.195336] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds [2024-11-20 06:29:41.195380] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl [2024-11-20 06:29:41.198208] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:09.656 Available Spare Threshold: 0% 00:19:09.656 Life Percentage Used: 0% 00:19:09.656 Data Units Read: 0 00:19:09.656 Data Units Written: 0 00:19:09.656 Host Read Commands: 0 00:19:09.656 Host Write Commands: 0 00:19:09.656 Controller Busy Time: 0 minutes 00:19:09.656 Power Cycles: 0 00:19:09.656 Power On Hours: 0 hours 00:19:09.656 Unsafe Shutdowns: 0 00:19:09.656 Unrecoverable Media Errors: 0 00:19:09.656 Lifetime Error Log Entries: 0 00:19:09.656 Warning Temperature 
Time: 0 minutes 00:19:09.656 Critical Temperature Time: 0 minutes 00:19:09.656 00:19:09.656 Number of Queues 00:19:09.656 ================ 00:19:09.656 Number of I/O Submission Queues: 127 00:19:09.656 Number of I/O Completion Queues: 127 00:19:09.656 00:19:09.656 Active Namespaces 00:19:09.656 ================= 00:19:09.656 Namespace ID:1 00:19:09.656 Error Recovery Timeout: Unlimited 00:19:09.656 Command Set Identifier: NVM (00h) 00:19:09.656 Deallocate: Supported 00:19:09.656 Deallocated/Unwritten Error: Not Supported 00:19:09.656 Deallocated Read Value: Unknown 00:19:09.656 Deallocate in Write Zeroes: Not Supported 00:19:09.656 Deallocated Guard Field: 0xFFFF 00:19:09.656 Flush: Supported 00:19:09.656 Reservation: Supported 00:19:09.656 Namespace Sharing Capabilities: Multiple Controllers 00:19:09.656 Size (in LBAs): 131072 (0GiB) 00:19:09.656 Capacity (in LBAs): 131072 (0GiB) 00:19:09.656 Utilization (in LBAs): 131072 (0GiB) 00:19:09.656 NGUID: 0254B6485E56410CA29140AEF7888BB8 00:19:09.656 UUID: 0254b648-5e56-410c-a291-40aef7888bb8 00:19:09.656 Thin Provisioning: Not Supported 00:19:09.656 Per-NS Atomic Units: Yes 00:19:09.656 Atomic Boundary Size (Normal): 0 00:19:09.656 Atomic Boundary Size (PFail): 0 00:19:09.656 Atomic Boundary Offset: 0 00:19:09.656 Maximum Single Source Range Length: 65535 00:19:09.656 Maximum Copy Length: 65535 00:19:09.656 Maximum Source Range Count: 1 00:19:09.656 NGUID/EUI64 Never Reused: No 00:19:09.656 Namespace Write Protected: No 00:19:09.656 Number of LBA Formats: 1 00:19:09.656 Current LBA Format: LBA Format #00 00:19:09.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:09.656 00:19:09.656 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:09.656 [2024-11-20 06:29:41.423378] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:14.928 Initializing NVMe Controllers 00:19:14.928 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:14.928 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:14.928 Initialization complete. Launching workers. 
00:19:14.928 ======================================================== 00:19:14.928 Latency(us) 00:19:14.928 Device Information : IOPS MiB/s Average min max 00:19:14.928 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.07 156.05 3203.81 942.17 8638.67 00:19:14.928 ======================================================== 00:19:14.928 Total : 39950.07 156.05 3203.81 942.17 8638.67 00:19:14.928 00:19:14.928 [2024-11-20 06:29:46.530460] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:14.928 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:15.187 [2024-11-20 06:29:46.762135] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:20.459 Initializing NVMe Controllers 00:19:20.459 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:20.459 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:20.459 Initialization complete. Launching workers. 00:19:20.459 ======================================================== 00:19:20.459 Latency(us) 00:19:20.459 Device Information : IOPS MiB/s Average min max 00:19:20.459 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39864.66 155.72 3210.46 1000.77 9569.72 00:19:20.459 ======================================================== 00:19:20.459 Total : 39864.66 155.72 3210.46 1000.77 9569.72 00:19:20.459 00:19:20.459 [2024-11-20 06:29:51.781367] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:20.459 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:20.459 [2024-11-20 06:29:51.994666] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:25.729 [2024-11-20 06:29:57.129305] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:25.729 Initializing NVMe Controllers 00:19:25.729 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:25.729 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:25.729 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:25.729 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:25.729 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:25.729 Initialization complete. Launching workers. 
00:19:25.729 Starting thread on core 2 00:19:25.729 Starting thread on core 3 00:19:25.729 Starting thread on core 1 00:19:25.729 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:25.729 [2024-11-20 06:29:57.425651] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:29.916 [2024-11-20 06:30:00.984419] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:29.916 Initializing NVMe Controllers 00:19:29.916 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:29.916 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:29.916 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:29.916 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:29.916 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:29.916 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:29.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:29.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:29.916 Initialization complete. Launching workers. 00:19:29.916 Starting thread on core 1 with urgent priority queue 00:19:29.916 Starting thread on core 2 with urgent priority queue 00:19:29.916 Starting thread on core 3 with urgent priority queue 00:19:29.916 Starting thread on core 0 with urgent priority queue 00:19:29.916 SPDK bdev Controller (SPDK2 ) core 0: 7114.33 IO/s 14.06 secs/100000 ios 00:19:29.916 SPDK bdev Controller (SPDK2 ) core 1: 5636.33 IO/s 17.74 secs/100000 ios 00:19:29.916 SPDK bdev Controller (SPDK2 ) core 2: 7611.00 IO/s 13.14 secs/100000 ios 00:19:29.916 SPDK bdev Controller (SPDK2 ) core 3: 5307.00 IO/s 18.84 secs/100000 ios 00:19:29.916 ======================================================== 00:19:29.916 00:19:29.916 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:29.916 [2024-11-20 06:30:01.276677] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:29.916 Initializing NVMe Controllers 00:19:29.916 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:29.916 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:29.916 Namespace ID: 1 size: 0GB 00:19:29.916 Initialization complete. 00:19:29.916 INFO: using host memory buffer for IO 00:19:29.916 Hello world! 
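
The perf, reconnect, arbitration and hello_world runs above all reach the target through the same vfio-user transport ID string. As a minimal sketch of that invocation pattern, using only commands recorded in this log (the TRID shell variable is introduced here for readability; the socket path and subsystem NQN are specific to this run, and the binaries are assumed to be launched from the spdk checkout):

  # Transport ID copied verbatim from the runs above.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 5-second run of 4096-byte reads at queue depth 128 on core mask 0x2 (core 1):
  build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # The same endpoint exercised by the hello_world example:
  build/examples/hello_world -d 256 -g -r "$TRID"
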
00:19:29.916 [2024-11-20 06:30:01.286739] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:29.916 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:29.916 [2024-11-20 06:30:01.568498] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:30.852 Initializing NVMe Controllers 00:19:30.852 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:30.852 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:30.852 Initialization complete. Launching workers. 00:19:30.852 submit (in ns) avg, min, max = 7031.6, 3177.1, 4000454.3 00:19:30.852 complete (in ns) avg, min, max = 19967.5, 1774.3, 4000692.4 00:19:30.852 00:19:30.852 Submit histogram 00:19:30.852 ================ 00:19:30.852 Range in us Cumulative Count 00:19:30.852 3.170 - 3.185: 0.0060% ( 1) 00:19:30.852 3.185 - 3.200: 0.0121% ( 1) 00:19:30.852 3.200 - 3.215: 0.0363% ( 4) 00:19:30.852 3.215 - 3.230: 0.1330% ( 16) 00:19:30.852 3.230 - 3.246: 0.5561% ( 70) 00:19:30.852 3.246 - 3.261: 2.5267% ( 326) 00:19:30.852 3.261 - 3.276: 7.4956% ( 822) 00:19:30.852 3.276 - 3.291: 13.7883% ( 1041) 00:19:30.852 3.291 - 3.307: 20.3832% ( 1091) 00:19:30.852 3.307 - 3.322: 27.1897% ( 1126) 00:19:30.852 3.322 - 3.337: 33.4885% ( 1042) 00:19:30.852 3.337 - 3.352: 39.3580% ( 971) 00:19:30.852 3.352 - 3.368: 45.3304% ( 988) 00:19:30.852 3.368 - 3.383: 50.8009% ( 905) 00:19:30.852 3.383 - 3.398: 55.6066% ( 795) 00:19:30.852 3.398 - 3.413: 61.2706% ( 937) 00:19:30.852 3.413 - 3.429: 70.2351% ( 1483) 00:19:30.852 3.429 - 3.444: 75.2705% ( 833) 00:19:30.852 3.444 - 3.459: 79.7377% ( 739) 00:19:30.852 3.459 - 3.474: 83.4069% ( 607) 00:19:30.852 3.474 - 3.490: 85.4440% ( 337) 00:19:30.852 3.490 - 3.505: 87.0036% ( 258) 00:19:30.852 3.505 - 3.520: 87.7531% ( 124) 00:19:30.852 3.520 - 3.535: 88.0614% ( 51) 00:19:30.852 3.535 - 3.550: 88.4664% ( 67) 00:19:30.852 3.550 - 3.566: 88.9621% ( 82) 00:19:30.852 3.566 - 3.581: 89.7540% ( 131) 00:19:30.852 3.581 - 3.596: 90.7695% ( 168) 00:19:30.852 3.596 - 3.611: 91.7367% ( 160) 00:19:30.852 3.611 - 3.627: 92.6736% ( 155) 00:19:30.852 3.627 - 3.642: 93.4776% ( 133) 00:19:30.852 3.642 - 3.657: 94.3541% ( 145) 00:19:30.852 3.657 - 3.672: 95.2669% ( 151) 00:19:30.852 3.672 - 3.688: 96.3610% ( 181) 00:19:30.852 3.688 - 3.703: 97.2738% ( 151) 00:19:30.852 3.703 - 3.718: 97.8420% ( 94) 00:19:30.852 3.718 - 3.733: 98.3195% ( 79) 00:19:30.852 3.733 - 3.749: 98.6641% ( 57) 00:19:30.852 3.749 - 3.764: 98.9603% ( 49) 00:19:30.852 3.764 - 3.779: 99.2504% ( 48) 00:19:30.852 3.779 - 3.794: 99.4076% ( 26) 00:19:30.852 3.794 - 3.810: 99.5043% ( 16) 00:19:30.852 3.810 - 3.825: 99.5587% ( 9) 00:19:30.852 3.825 - 3.840: 99.6071% ( 8) 00:19:30.852 3.840 - 3.855: 99.6252% ( 3) 00:19:30.852 3.870 - 3.886: 99.6313% ( 1) 00:19:30.852 4.754 - 4.785: 99.6373% ( 1) 00:19:30.852 5.638 - 5.669: 99.6434% ( 1) 00:19:30.852 5.669 - 5.699: 99.6494% ( 1) 00:19:30.852 5.730 - 5.760: 99.6554% ( 1) 00:19:30.852 5.760 - 5.790: 99.6615% ( 1) 00:19:30.852 5.912 - 5.943: 99.6675% ( 1) 00:19:30.852 5.943 - 5.973: 99.6736% ( 1) 00:19:30.852 6.004 - 6.034: 99.6796% ( 1) 00:19:30.852 6.065 - 6.095: 99.6857% ( 1) 00:19:30.852 6.095 - 6.126: 99.7038% ( 3) 00:19:30.852 6.126 - 6.156: 99.7098% ( 1) 00:19:30.852 
6.156 - 6.187: 99.7219% ( 2) 00:19:30.852 6.217 - 6.248: 99.7280% ( 1) 00:19:30.852 6.278 - 6.309: 99.7340% ( 1) 00:19:30.852 6.339 - 6.370: 99.7401% ( 1) 00:19:30.852 6.370 - 6.400: 99.7461% ( 1) 00:19:30.852 6.430 - 6.461: 99.7582% ( 2) 00:19:30.852 6.491 - 6.522: 99.7824% ( 4) 00:19:30.852 6.583 - 6.613: 99.7884% ( 1) 00:19:30.852 6.644 - 6.674: 99.7945% ( 1) 00:19:30.852 6.674 - 6.705: 99.8005% ( 1) 00:19:30.852 6.705 - 6.735: 99.8066% ( 1) 00:19:30.852 6.735 - 6.766: 99.8126% ( 1) 00:19:30.853 6.796 - 6.827: 99.8187% ( 1) 00:19:30.853 6.979 - 7.010: 99.8247% ( 1) 00:19:30.853 7.040 - 7.070: 99.8368% ( 2) 00:19:30.853 7.070 - 7.101: 99.8428% ( 1) 00:19:30.853 7.467 - 7.497: 99.8489% ( 1) 00:19:30.853 7.589 - 7.619: 99.8549% ( 1) 00:19:30.853 7.680 - 7.710: 99.8670% ( 2) 00:19:30.853 7.985 - 8.046: 99.8731% ( 1) 00:19:30.853 8.046 - 8.107: 99.8851% ( 2) 00:19:30.853 8.107 - 8.168: 99.8912% ( 1) 00:19:30.853 [2024-11-20 06:30:02.664199] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:31.112 8.229 - 8.290: 99.8972% ( 1) 00:19:31.112 8.899 - 8.960: 99.9033% ( 1) 00:19:31.112 9.082 - 9.143: 99.9093% ( 1) 00:19:31.112 3994.575 - 4025.783: 100.0000% ( 15) 00:19:31.112 00:19:31.112 Complete histogram 00:19:31.112 ================== 00:19:31.112 Range in us Cumulative Count 00:19:31.112 1.768 - 1.775: 0.0181% ( 3) 00:19:31.112 1.775 - 1.783: 0.0967% ( 13) 00:19:31.112 1.783 - 1.790: 0.4534% ( 59) 00:19:31.112 1.790 - 1.798: 1.0337% ( 96) 00:19:31.112 1.798 - 1.806: 1.7591% ( 120) 00:19:31.112 1.806 - 1.813: 2.2729% ( 85) 00:19:31.112 1.813 - 1.821: 4.7029% ( 402) 00:19:31.112 1.821 - 1.829: 23.7623% ( 3153) 00:19:31.112 1.829 - 1.836: 60.1100% ( 6013) 00:19:31.112 1.836 - 1.844: 82.7661% ( 3748) 00:19:31.112 1.844 - 1.851: 90.2013% ( 1230) 00:19:31.112 1.851 - 1.859: 93.3688% ( 524) 00:19:31.112 1.859 - 1.867: 95.6719% ( 381) 00:19:31.112 1.867 - 1.874: 96.4940% ( 136) 00:19:31.112 1.874 - 1.882: 96.7962% ( 50) 00:19:31.112 1.882 - 1.890: 97.1831% ( 64) 00:19:31.112 1.890 - 1.897: 97.5639% ( 63) 00:19:31.112 1.897 - 1.905: 98.1563% ( 98) 00:19:31.112 1.905 - 1.912: 98.5976% ( 73) 00:19:31.112 1.912 - 1.920: 98.9784% ( 63) 00:19:31.112 1.920 - 1.928: 99.2202% ( 40) 00:19:31.112 1.928 - 1.935: 99.2867% ( 11) 00:19:31.112 1.935 - 1.943: 99.3290% ( 7) 00:19:31.112 1.943 - 1.950: 99.3411% ( 2) 00:19:31.112 1.950 - 1.966: 99.3713% ( 5) 00:19:31.112 1.981 - 1.996: 99.3774% ( 1) 00:19:31.112 2.027 - 2.042: 99.3895% ( 2) 00:19:31.112 2.103 - 2.118: 99.3955% ( 1) 00:19:31.112 4.785 - 4.815: 99.4016% ( 1) 00:19:31.112 4.815 - 4.846: 99.4076% ( 1) 00:19:31.112 4.846 - 4.876: 99.4136% ( 1) 00:19:31.112 5.059 - 5.090: 99.4197% ( 1) 00:19:31.112 5.638 - 5.669: 99.4257% ( 1) 00:19:31.112 5.821 - 5.851: 99.4318% ( 1) 00:19:31.112 5.851 - 5.882: 99.4378% ( 1) 00:19:31.112 5.882 - 5.912: 99.4439% ( 1) 00:19:31.112 5.973 - 6.004: 99.4560% ( 2) 00:19:31.112 6.004 - 6.034: 99.4620% ( 1) 00:19:31.112 6.126 - 6.156: 99.4681% ( 1) 00:19:31.112 6.217 - 6.248: 99.4741% ( 1) 00:19:31.112 6.735 - 6.766: 99.4862% ( 2) 00:19:31.112 6.766 - 6.796: 99.4922% ( 1) 00:19:31.112 6.979 - 7.010: 99.4983% ( 1) 00:19:31.112 7.314 - 7.345: 99.5043% ( 1) 00:19:31.112 7.558 - 7.589: 99.5104% ( 1) 00:19:31.112 7.741 - 7.771: 99.5164% ( 1) 00:19:31.112 12.190 - 12.251: 99.5225% ( 1) 00:19:31.112 13.166 - 13.227: 99.5285% ( 1) 00:19:31.112 17.798 - 17.920: 99.5345% ( 1) 00:19:31.112 42.667 - 42.910: 99.5406% ( 1) 00:19:31.112 157.989 - 158.964: 99.5466% ( 1) 00:19:31.112 3994.575 
- 4025.783: 100.0000% ( 75) 00:19:31.112 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:31.112 [ 00:19:31.112 { 00:19:31.112 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:31.112 "subtype": "Discovery", 00:19:31.112 "listen_addresses": [], 00:19:31.112 "allow_any_host": true, 00:19:31.112 "hosts": [] 00:19:31.112 }, 00:19:31.112 { 00:19:31.112 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:31.112 "subtype": "NVMe", 00:19:31.112 "listen_addresses": [ 00:19:31.112 { 00:19:31.112 "trtype": "VFIOUSER", 00:19:31.112 "adrfam": "IPv4", 00:19:31.112 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:31.112 "trsvcid": "0" 00:19:31.112 } 00:19:31.112 ], 00:19:31.112 "allow_any_host": true, 00:19:31.112 "hosts": [], 00:19:31.112 "serial_number": "SPDK1", 00:19:31.112 "model_number": "SPDK bdev Controller", 00:19:31.112 "max_namespaces": 32, 00:19:31.112 "min_cntlid": 1, 00:19:31.112 "max_cntlid": 65519, 00:19:31.112 "namespaces": [ 00:19:31.112 { 00:19:31.112 "nsid": 1, 00:19:31.112 "bdev_name": "Malloc1", 00:19:31.112 "name": "Malloc1", 00:19:31.112 "nguid": "12C5AEDCA26F41CBAB33FF36EF650D57", 00:19:31.112 "uuid": "12c5aedc-a26f-41cb-ab33-ff36ef650d57" 00:19:31.112 }, 00:19:31.112 { 00:19:31.112 "nsid": 2, 00:19:31.112 "bdev_name": "Malloc3", 00:19:31.112 "name": "Malloc3", 00:19:31.112 "nguid": "CEB4BE5067C74E059690BEC7791076DD", 00:19:31.112 "uuid": "ceb4be50-67c7-4e05-9690-bec7791076dd" 00:19:31.112 } 00:19:31.112 ] 00:19:31.112 }, 00:19:31.112 { 00:19:31.112 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:31.112 "subtype": "NVMe", 00:19:31.112 "listen_addresses": [ 00:19:31.112 { 00:19:31.112 "trtype": "VFIOUSER", 00:19:31.112 "adrfam": "IPv4", 00:19:31.112 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:31.112 "trsvcid": "0" 00:19:31.112 } 00:19:31.112 ], 00:19:31.112 "allow_any_host": true, 00:19:31.112 "hosts": [], 00:19:31.112 "serial_number": "SPDK2", 00:19:31.112 "model_number": "SPDK bdev Controller", 00:19:31.112 "max_namespaces": 32, 00:19:31.112 "min_cntlid": 1, 00:19:31.112 "max_cntlid": 65519, 00:19:31.112 "namespaces": [ 00:19:31.112 { 00:19:31.112 "nsid": 1, 00:19:31.112 "bdev_name": "Malloc2", 00:19:31.112 "name": "Malloc2", 00:19:31.112 "nguid": "0254B6485E56410CA29140AEF7888BB8", 00:19:31.112 "uuid": "0254b648-5e56-410c-a291-40aef7888bb8" 00:19:31.112 } 00:19:31.112 ] 00:19:31.112 } 00:19:31.112 ] 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=519338 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:31.112 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:31.371 [2024-11-20 06:30:03.069617] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:31.371 Malloc4 00:19:31.371 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:31.630 [2024-11-20 06:30:03.320441] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:31.630 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:31.630 Asynchronous Event Request test 00:19:31.630 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:31.630 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:31.630 Registering asynchronous event callbacks... 00:19:31.630 Starting namespace attribute notice tests for all controllers... 00:19:31.630 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:31.630 aer_cb - Changed Namespace 00:19:31.630 Cleaning up... 
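
The namespace-attribute AER above is triggered by hot-adding a second namespace to cnode2 while the aer tool waits on the touch file, and the subsystem listing that follows shows the result. A minimal sketch of that hot-add sequence, using only the RPCs recorded in this trace (the NQN, bdev name and NSID are specific to this run; rpc.py is assumed to be invoked from the spdk checkout against the default RPC socket):

  # Create a 64 MiB malloc bdev with 512-byte blocks, then attach it as NSID 2;
  # the attach is what fires the "Changed Namespace" AER logged above.
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  # The new namespace should now appear in the subsystem listing:
  scripts/rpc.py nvmf_get_subsystems
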
00:19:31.889 [ 00:19:31.889 { 00:19:31.889 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:31.889 "subtype": "Discovery", 00:19:31.889 "listen_addresses": [], 00:19:31.889 "allow_any_host": true, 00:19:31.889 "hosts": [] 00:19:31.889 }, 00:19:31.889 { 00:19:31.889 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:31.889 "subtype": "NVMe", 00:19:31.889 "listen_addresses": [ 00:19:31.889 { 00:19:31.889 "trtype": "VFIOUSER", 00:19:31.889 "adrfam": "IPv4", 00:19:31.889 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:31.889 "trsvcid": "0" 00:19:31.889 } 00:19:31.889 ], 00:19:31.889 "allow_any_host": true, 00:19:31.889 "hosts": [], 00:19:31.889 "serial_number": "SPDK1", 00:19:31.889 "model_number": "SPDK bdev Controller", 00:19:31.889 "max_namespaces": 32, 00:19:31.889 "min_cntlid": 1, 00:19:31.889 "max_cntlid": 65519, 00:19:31.889 "namespaces": [ 00:19:31.889 { 00:19:31.889 "nsid": 1, 00:19:31.889 "bdev_name": "Malloc1", 00:19:31.889 "name": "Malloc1", 00:19:31.889 "nguid": "12C5AEDCA26F41CBAB33FF36EF650D57", 00:19:31.889 "uuid": "12c5aedc-a26f-41cb-ab33-ff36ef650d57" 00:19:31.889 }, 00:19:31.889 { 00:19:31.889 "nsid": 2, 00:19:31.889 "bdev_name": "Malloc3", 00:19:31.889 "name": "Malloc3", 00:19:31.889 "nguid": "CEB4BE5067C74E059690BEC7791076DD", 00:19:31.889 "uuid": "ceb4be50-67c7-4e05-9690-bec7791076dd" 00:19:31.889 } 00:19:31.889 ] 00:19:31.889 }, 00:19:31.889 { 00:19:31.889 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:31.889 "subtype": "NVMe", 00:19:31.889 "listen_addresses": [ 00:19:31.889 { 00:19:31.889 "trtype": "VFIOUSER", 00:19:31.889 "adrfam": "IPv4", 00:19:31.889 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:31.889 "trsvcid": "0" 00:19:31.890 } 00:19:31.890 ], 00:19:31.890 "allow_any_host": true, 00:19:31.890 "hosts": [], 00:19:31.890 "serial_number": "SPDK2", 00:19:31.890 "model_number": "SPDK bdev Controller", 00:19:31.890 "max_namespaces": 32, 00:19:31.890 "min_cntlid": 1, 00:19:31.890 "max_cntlid": 65519, 00:19:31.890 "namespaces": [ 00:19:31.890 { 00:19:31.890 "nsid": 1, 00:19:31.890 "bdev_name": "Malloc2", 00:19:31.890 "name": "Malloc2", 00:19:31.890 "nguid": "0254B6485E56410CA29140AEF7888BB8", 00:19:31.890 "uuid": "0254b648-5e56-410c-a291-40aef7888bb8" 00:19:31.890 }, 00:19:31.890 { 00:19:31.890 "nsid": 2, 00:19:31.890 "bdev_name": "Malloc4", 00:19:31.890 "name": "Malloc4", 00:19:31.890 "nguid": "6283A5D7D0C643A9830D1BD5753CD1DF", 00:19:31.890 "uuid": "6283a5d7-d0c6-43a9-830d-1bd5753cd1df" 00:19:31.890 } 00:19:31.890 ] 00:19:31.890 } 00:19:31.890 ] 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 519338 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 511483 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 511483 ']' 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 511483 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 511483 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 511483' 00:19:31.890 killing process with pid 511483 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 511483 00:19:31.890 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 511483 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=519526 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 519526' 00:19:32.149 Process pid: 519526 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 519526 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 519526 ']' 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.149 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:32.149 [2024-11-20 06:30:03.882071] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:32.149 [2024-11-20 06:30:03.882975] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:32.149 [2024-11-20 06:30:03.883018] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.149 [2024-11-20 06:30:03.958579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.408 [2024-11-20 06:30:03.997780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.408 [2024-11-20 06:30:03.997815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.408 [2024-11-20 06:30:03.997821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.408 [2024-11-20 06:30:03.997845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.408 [2024-11-20 06:30:03.997850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.408 [2024-11-20 06:30:03.999400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.408 [2024-11-20 06:30:03.999516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.408 [2024-11-20 06:30:03.999626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.408 [2024-11-20 06:30:03.999627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.408 [2024-11-20 06:30:04.067392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:32.408 [2024-11-20 06:30:04.067781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:32.408 [2024-11-20 06:30:04.068217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:32.408 [2024-11-20 06:30:04.068526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:32.408 [2024-11-20 06:30:04.068581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
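
With the target restarted in interrupt mode, the trace below repeats the vfio-user setup, this time creating the VFIOUSER transport with -M -I. Condensed to the first device as a sketch, using only the RPCs recorded below (socket directory, NQN and serial number are specific to this run; rpc.py is assumed to be invoked from the spdk checkout):

  # Interrupt-mode transport, then one vfio-user device end to end.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
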
00:19:32.408 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.408 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:19:32.408 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:33.346 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:33.605 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:33.605 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:33.605 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:33.605 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:33.605 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:33.864 Malloc1 00:19:33.864 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:34.122 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:34.122 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:34.380 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:34.380 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:34.380 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:34.639 Malloc2 00:19:34.639 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:34.898 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:35.156 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:35.156 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:35.156 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 519526 00:19:35.156 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 519526 ']' 00:19:35.156 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 519526 00:19:35.156 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:19:35.156 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:35.156 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 519526 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 519526' 00:19:35.416 killing process with pid 519526 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 519526 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 519526 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:35.416 00:19:35.416 real 0m51.974s 00:19:35.416 user 3m21.260s 00:19:35.416 sys 0m3.315s 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:35.416 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:35.416 ************************************ 00:19:35.416 END TEST nvmf_vfio_user 00:19:35.416 ************************************ 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.676 ************************************ 00:19:35.676 START TEST nvmf_vfio_user_nvme_compliance 00:19:35.676 ************************************ 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:35.676 * Looking for test storage... 
00:19:35.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.676 --rc genhtml_branch_coverage=1 00:19:35.676 --rc genhtml_function_coverage=1 00:19:35.676 --rc genhtml_legend=1 00:19:35.676 --rc geninfo_all_blocks=1 00:19:35.676 --rc geninfo_unexecuted_blocks=1 00:19:35.676 00:19:35.676 ' 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.676 --rc genhtml_branch_coverage=1 00:19:35.676 --rc genhtml_function_coverage=1 00:19:35.676 --rc genhtml_legend=1 00:19:35.676 --rc geninfo_all_blocks=1 00:19:35.676 --rc geninfo_unexecuted_blocks=1 00:19:35.676 00:19:35.676 ' 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.676 --rc genhtml_branch_coverage=1 00:19:35.676 --rc genhtml_function_coverage=1 00:19:35.676 --rc genhtml_legend=1 00:19:35.676 --rc geninfo_all_blocks=1 00:19:35.676 --rc geninfo_unexecuted_blocks=1 00:19:35.676 00:19:35.676 ' 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.676 --rc genhtml_branch_coverage=1 00:19:35.676 --rc genhtml_function_coverage=1 00:19:35.676 --rc genhtml_legend=1 00:19:35.676 --rc geninfo_all_blocks=1 00:19:35.676 --rc 
geninfo_unexecuted_blocks=1 00:19:35.676 00:19:35.676 ' 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.676 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated six more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=[the same value, rotated to start at /opt/go/1.21.1/bin] 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=[the same value, rotated to start at /opt/protoc/21.7/bin] 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo [the same PATH value, starting at /opt/protoc/21.7/bin] 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.677 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=520676 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 520676' 00:19:35.937 Process pid: 520676 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 520676 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 520676 ']' 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.937 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:35.937 [2024-11-20 06:30:07.563422] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:35.937 [2024-11-20 06:30:07.563470] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.937 [2024-11-20 06:30:07.636106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:35.937 [2024-11-20 06:30:07.678109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.937 [2024-11-20 06:30:07.678146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.937 [2024-11-20 06:30:07.678153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.937 [2024-11-20 06:30:07.678159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.937 [2024-11-20 06:30:07.678164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.937 [2024-11-20 06:30:07.679601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.937 [2024-11-20 06:30:07.679712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.937 [2024-11-20 06:30:07.679713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.195 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.195 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:19:36.195 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:37.131 malloc0 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:37.131 06:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:37.131 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.132 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:37.132 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.132 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:37.132 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.132 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:37.389 00:19:37.389 00:19:37.389 CUnit - A unit testing framework for C - Version 2.1-3 00:19:37.389 http://cunit.sourceforge.net/ 00:19:37.389 00:19:37.389 00:19:37.389 Suite: nvme_compliance 00:19:37.389 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 06:30:09.018657] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:37.389 [2024-11-20 06:30:09.020011] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:37.390 [2024-11-20 06:30:09.020026] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:37.390 [2024-11-20 06:30:09.020032] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:37.390 [2024-11-20 06:30:09.022691] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:37.390 passed 00:19:37.390 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 06:30:09.102234] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:37.390 [2024-11-20 06:30:09.105257] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:37.390 passed 00:19:37.390 Test: admin_identify_ns ...[2024-11-20 06:30:09.184513] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:37.646 [2024-11-20 06:30:09.245215] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:37.646 [2024-11-20 06:30:09.253215] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:37.646 [2024-11-20 06:30:09.274293] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:37.646 passed 00:19:37.646 Test: admin_get_features_mandatory_features ...[2024-11-20 06:30:09.348144] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:37.646 [2024-11-20 06:30:09.351167] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:37.646 passed 00:19:37.646 Test: admin_get_features_optional_features ...[2024-11-20 06:30:09.427676] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:37.646 [2024-11-20 06:30:09.431706] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:37.646 passed 00:19:37.904 Test: admin_set_features_number_of_queues ...[2024-11-20 06:30:09.508440] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:37.904 [2024-11-20 06:30:09.614301] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:37.904 passed 00:19:37.904 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 06:30:09.691162] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:37.904 [2024-11-20 06:30:09.694179] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:37.904 passed 00:19:38.162 Test: admin_get_log_page_with_lpo ...[2024-11-20 06:30:09.770558] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:38.162 [2024-11-20 06:30:09.839213] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:38.162 [2024-11-20 06:30:09.852279] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:38.162 passed 00:19:38.162 Test: fabric_property_get ...[2024-11-20 06:30:09.925037] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:38.162 [2024-11-20 06:30:09.926272] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:38.162 [2024-11-20 06:30:09.930065] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:38.162 passed 00:19:38.420 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 06:30:10.006582] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:38.420 [2024-11-20 06:30:10.007833] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:38.420 [2024-11-20 06:30:10.010607] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:38.420 passed 00:19:38.420 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 06:30:10.089801] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:38.420 [2024-11-20 06:30:10.173213] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:38.420 [2024-11-20 06:30:10.189220] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:38.420 [2024-11-20 06:30:10.194361] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:38.420 passed 00:19:38.679 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 06:30:10.269334] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:38.679 [2024-11-20 06:30:10.270566] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:38.679 [2024-11-20 06:30:10.272356] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:38.679 passed 00:19:38.679 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 06:30:10.350108] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:38.679 [2024-11-20 06:30:10.425211] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:38.679 [2024-11-20 06:30:10.449211] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:38.679 [2024-11-20 06:30:10.454288] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:38.679 passed 00:19:38.938 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 06:30:10.529948] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:38.938 [2024-11-20 06:30:10.531192] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:38.938 [2024-11-20 06:30:10.531218] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:38.938 [2024-11-20 06:30:10.535983] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:38.938 passed 00:19:38.938 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 06:30:10.611761] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:38.938 [2024-11-20 06:30:10.703210] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:38.938 [2024-11-20 06:30:10.711211] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:38.938 [2024-11-20 06:30:10.719206] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:38.938 [2024-11-20 06:30:10.727218] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:38.938 [2024-11-20 06:30:10.756288] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:39.197 passed 00:19:39.197 Test: admin_create_io_sq_verify_pc ...[2024-11-20 06:30:10.831971] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:39.197 [2024-11-20 06:30:10.848218] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:39.197 [2024-11-20 06:30:10.866156] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:39.197 passed 00:19:39.197 Test: admin_create_io_qp_max_qps ...[2024-11-20 06:30:10.943690] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.570 [2024-11-20 06:30:12.030210] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:40.828 [2024-11-20 06:30:12.413816] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.828 passed 00:19:40.828 Test: admin_create_io_sq_shared_cq ...[2024-11-20 06:30:12.490845] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.828 [2024-11-20 06:30:12.622209] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:40.828 [2024-11-20 06:30:12.659279] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.087 passed 00:19:41.087 00:19:41.087 Run Summary: Type Total Ran Passed Failed Inactive 00:19:41.087 suites 1 1 n/a 0 0 00:19:41.087 tests 18 18 18 0 0 00:19:41.087 asserts 
360 360 360 0 n/a 00:19:41.087 00:19:41.087 Elapsed time = 1.494 seconds 00:19:41.087 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 520676 00:19:41.087 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 520676 ']' 00:19:41.087 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 520676 00:19:41.087 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:19:41.087 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:41.087 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 520676 00:19:41.087 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:41.088 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:41.088 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 520676' 00:19:41.088 killing process with pid 520676 00:19:41.088 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 520676 00:19:41.088 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 520676 00:19:41.347 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:41.347 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:41.347 00:19:41.347 real 0m5.642s 00:19:41.347 user 0m15.718s 00:19:41.347 sys 0m0.526s 00:19:41.347 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:41.347 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:41.347 ************************************ 00:19:41.347 END TEST nvmf_vfio_user_nvme_compliance 00:19:41.347 ************************************ 00:19:41.347 06:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:41.347 06:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:41.347 06:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:41.347 06:30:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.347 ************************************ 00:19:41.347 START TEST nvmf_vfio_user_fuzz 00:19:41.347 ************************************ 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:41.347 * Looking for test storage... 
00:19:41.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:41.347 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:41.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.607 --rc genhtml_branch_coverage=1 00:19:41.607 --rc genhtml_function_coverage=1 00:19:41.607 --rc genhtml_legend=1 00:19:41.607 --rc geninfo_all_blocks=1 00:19:41.607 --rc geninfo_unexecuted_blocks=1 00:19:41.607 00:19:41.607 ' 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:41.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.607 --rc genhtml_branch_coverage=1 00:19:41.607 --rc genhtml_function_coverage=1 00:19:41.607 --rc genhtml_legend=1 00:19:41.607 --rc geninfo_all_blocks=1 00:19:41.607 --rc geninfo_unexecuted_blocks=1 00:19:41.607 00:19:41.607 ' 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:41.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.607 --rc genhtml_branch_coverage=1 00:19:41.607 --rc genhtml_function_coverage=1 00:19:41.607 --rc genhtml_legend=1 00:19:41.607 --rc geninfo_all_blocks=1 00:19:41.607 --rc geninfo_unexecuted_blocks=1 00:19:41.607 00:19:41.607 ' 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:41.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.607 --rc genhtml_branch_coverage=1 00:19:41.607 --rc genhtml_function_coverage=1 00:19:41.607 --rc genhtml_legend=1 00:19:41.607 --rc geninfo_all_blocks=1 00:19:41.607 --rc geninfo_unexecuted_blocks=1 00:19:41.607 00:19:41.607 ' 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.607 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:41.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=521665 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 521665' 00:19:41.608 Process pid: 521665 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 521665 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 521665 ']' 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
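The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33, where the traced test expands to '[' '' -eq 1 ']': bash's [ builtin requires integer operands for -eq, and the left-hand variable is empty in this run, so the test prints the error and evaluates false while the script carries on. A minimal reproduction and a guarded form (the variable name below is illustrative, not the one common.sh actually checks):

  flag=''
  [ "$flag" -eq 1 ] && echo on        # -> [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo on   # defaulted expansion keeps -eq happy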
00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:41.608 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:41.867 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:41.867 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:19:41.867 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:42.803 malloc0 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.803 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:42.804 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.804 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:42.804 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.804 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
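With the add_listener call above, the fuzz target is fully assembled: a VFIOUSER transport, a 64 MB / 512 B-block malloc bdev, subsystem nqn.2021-09.io.spdk:cnode0 with malloc0 as its namespace, and a listener under /var/run/vfio-user. The same sequence, condensed as a sketch of standalone scripts/rpc.py calls rather than the harness's rpc_cmd wrapper (the script path and default RPC socket are assumptions here):

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows points at this controller via -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' and fires randomized admin and I/O commands at it for the 30-second -t window.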
00:19:42.804 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:14.987 Fuzzing completed. Shutting down the fuzz application 00:20:14.987 00:20:14.987 Dumping successful admin opcodes: 00:20:14.987 8, 9, 10, 24, 00:20:14.987 Dumping successful io opcodes: 00:20:14.987 0, 00:20:14.987 NS: 0x20000081ef00 I/O qp, Total commands completed: 1139259, total successful commands: 4488, random_seed: 441258752 00:20:14.987 NS: 0x20000081ef00 admin qp, Total commands completed: 282638, total successful commands: 2274, random_seed: 1002976704 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 521665 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 521665 ']' 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 521665 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 521665 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 521665' 00:20:14.987 killing process with pid 521665 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 521665 00:20:14.987 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 521665 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:14.987 00:20:14.987 real 0m32.235s 00:20:14.987 user 0m33.687s 00:20:14.987 sys 0m27.588s 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:14.987 ************************************ 
00:20:14.987 END TEST nvmf_vfio_user_fuzz 00:20:14.987 ************************************ 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:14.987 ************************************ 00:20:14.987 START TEST nvmf_auth_target 00:20:14.987 ************************************ 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:14.987 * Looking for test storage... 00:20:14.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.987 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.988 --rc genhtml_branch_coverage=1 00:20:14.988 --rc genhtml_function_coverage=1 00:20:14.988 --rc genhtml_legend=1 00:20:14.988 --rc geninfo_all_blocks=1 00:20:14.988 --rc geninfo_unexecuted_blocks=1 00:20:14.988 00:20:14.988 ' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.988 --rc genhtml_branch_coverage=1 00:20:14.988 --rc genhtml_function_coverage=1 00:20:14.988 --rc genhtml_legend=1 00:20:14.988 --rc geninfo_all_blocks=1 00:20:14.988 --rc geninfo_unexecuted_blocks=1 00:20:14.988 00:20:14.988 ' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.988 --rc genhtml_branch_coverage=1 00:20:14.988 --rc genhtml_function_coverage=1 00:20:14.988 --rc genhtml_legend=1 00:20:14.988 --rc geninfo_all_blocks=1 00:20:14.988 --rc geninfo_unexecuted_blocks=1 00:20:14.988 00:20:14.988 ' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.988 --rc genhtml_branch_coverage=1 00:20:14.988 --rc genhtml_function_coverage=1 00:20:14.988 --rc genhtml_legend=1 00:20:14.988 --rc geninfo_all_blocks=1 00:20:14.988 --rc geninfo_unexecuted_blocks=1 00:20:14.988 00:20:14.988 ' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.988 06:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.988 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.989 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.989 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.989 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.989 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.989 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:20.265 
06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:20.265 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:20.266 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:20.266 06:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:20.266 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:20.266 Found net devices under 0000:86:00.0: cvl_0_0 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:20.266 Found net devices under 0000:86:00.1: cvl_0_1 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:20.266 06:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:20.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:20:20.266 00:20:20.266 --- 10.0.0.2 ping statistics --- 00:20:20.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.266 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:20.266 00:20:20.266 --- 10.0.0.1 ping statistics --- 00:20:20.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.266 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=529977 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 529977 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 529977 ']' 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
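The nvmf_tcp_init sequence above gives the target and initiator independent network stacks on one machine: the first e810 port (cvl_0_0) is moved into a private namespace and addressed as the target, the second (cvl_0_1) stays in the root namespace as the initiator, and a one-packet ping in each direction proves the link before any NVMe traffic flows. A minimal sketch of that sequence, reusing this run's interface names and the 10.0.0.1/10.0.0.2 addressing:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                        # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> root-namespace initiator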
00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.266 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=530177 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=751c3d84608b68436996a60c9e302a0b9a836e5665638c97 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jNY 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 751c3d84608b68436996a60c9e302a0b9a836e5665638c97 0 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 751c3d84608b68436996a60c9e302a0b9a836e5665638c97 0 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=751c3d84608b68436996a60c9e302a0b9a836e5665638c97 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jNY 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jNY 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.jNY 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:20.833 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=71bf674480f9b1d8f7484cc7ed183913d29375c554e91644921ccf80e5c1ae59 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.q4x 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 71bf674480f9b1d8f7484cc7ed183913d29375c554e91644921ccf80e5c1ae59 3 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 71bf674480f9b1d8f7484cc7ed183913d29375c554e91644921ccf80e5c1ae59 3 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=71bf674480f9b1d8f7484cc7ed183913d29375c554e91644921ccf80e5c1ae59 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.q4x 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.q4x 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.q4x 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b05884a9683fb1a34a464734e80bedb5 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ODS 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b05884a9683fb1a34a464734e80bedb5 1 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b05884a9683fb1a34a464734e80bedb5 1 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b05884a9683fb1a34a464734e80bedb5 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ODS 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ODS 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.ODS 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=45b51489865eeaf1ef3b251c3ab72f1c8abc7d5028f7b340 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jRx 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 45b51489865eeaf1ef3b251c3ab72f1c8abc7d5028f7b340 2 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 45b51489865eeaf1ef3b251c3ab72f1c8abc7d5028f7b340 2 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:20.834 06:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=45b51489865eeaf1ef3b251c3ab72f1c8abc7d5028f7b340 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:20.834 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jRx 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jRx 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.jRx 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0b06e0d127d4e05ddb0a12e4c2b885315dddb719d6ceeb25 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1aX 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0b06e0d127d4e05ddb0a12e4c2b885315dddb719d6ceeb25 2 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0b06e0d127d4e05ddb0a12e4c2b885315dddb719d6ceeb25 2 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0b06e0d127d4e05ddb0a12e4c2b885315dddb719d6ceeb25 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1aX 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1aX 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.1aX 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=29b051d2435d9951c5ee283a6bc7b5c2 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.95D 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 29b051d2435d9951c5ee283a6bc7b5c2 1 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 29b051d2435d9951c5ee283a6bc7b5c2 1 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=29b051d2435d9951c5ee283a6bc7b5c2 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.95D 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.95D 00:20:21.093 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.95D 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=be7a7acaaede2ef60bb138e6bff83ffd097db863aee26df880de28a639875662 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ejz 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key be7a7acaaede2ef60bb138e6bff83ffd097db863aee26df880de28a639875662 3 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 be7a7acaaede2ef60bb138e6bff83ffd097db863aee26df880de28a639875662 3 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=be7a7acaaede2ef60bb138e6bff83ffd097db863aee26df880de28a639875662 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ejz 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ejz 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Ejz 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 529977 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 529977 ']' 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.094 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 530177 /var/tmp/host.sock 00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 530177 ']' 00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:21.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
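Each gen_dhchap_key call above draws random bytes with xxd and wraps them into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest>:<base64 payload>:. Comparing the hex keys generated here with the secrets later passed to nvme connect in this log, the payload is the hex string itself followed by a 4-byte CRC-32 tail. A self-contained re-creation of that behavior (the helper name is reused for readability; the little-endian CRC byte order follows the usual DH-HMAC-CHAP secret convention and is an assumption, not something visible in this log):

gen_dhchap_key() {    # usage: gen_dhchap_key <digest 0-3> <hex length>
    local digest=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len=48 -> 24 random bytes -> 48 hex chars
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex text itself is the secret material
digest = int(sys.argv[2])                        # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(secret).to_bytes(4, "little")   # 4-byte CRC-32 tail (endianness assumed)
print(f"DHHC-1:{digest:02d}:{base64.b64encode(secret + crc).decode()}:")
PY
}

gen_dhchap_key 0 48    # e.g. keys[0] above: 48-character secret, null digest

Note the pairing built up above: keys[i] is the host key and ckeys[i] the controller (bidirectional) key for each slot, except ckeys[3], which is left empty.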
00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.353 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jNY 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jNY 00:20:21.611 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jNY 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.q4x ]] 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q4x 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q4x 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q4x 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ODS 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.870 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.129 06:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ODS 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ODS 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.jRx ]] 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jRx 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jRx 00:20:22.129 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jRx 00:20:22.388 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:22.388 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1aX 00:20:22.388 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.388 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.388 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.388 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.1aX 00:20:22.388 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.1aX 00:20:22.647 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.95D ]] 00:20:22.647 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.95D 00:20:22.647 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.647 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.647 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.647 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.95D 00:20:22.647 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.95D 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:22.906 06:30:54 
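Both daemons have to hold the same key material under the same names: rpc_cmd talks to the nvmf_tgt on the default /var/tmp/spdk.sock, while hostrpc talks to the host-side spdk_tgt started with -r /var/tmp/host.sock. The registration pattern being logged here, condensed (key-file paths are this run's mktemp results):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTRPC="$RPC -s /var/tmp/host.sock"
$RPC     keyring_file_add_key key0  /tmp/spdk.key-null.jNY       # nvmf target side
$HOSTRPC keyring_file_add_key key0  /tmp/spdk.key-null.jNY       # host daemon side
$RPC     keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q4x
$HOSTRPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q4x
# ... and likewise key1/ckey1, key2/ckey2 and key3 (no ckey3, since ckeys[3] is empty)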
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ejz 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Ejz 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Ejz 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.906 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.165 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.165 
06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.423 00:20:23.423 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.423 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.423 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.682 { 00:20:23.682 "cntlid": 1, 00:20:23.682 "qid": 0, 00:20:23.682 "state": "enabled", 00:20:23.682 "thread": "nvmf_tgt_poll_group_000", 00:20:23.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:23.682 "listen_address": { 00:20:23.682 "trtype": "TCP", 00:20:23.682 "adrfam": "IPv4", 00:20:23.682 "traddr": "10.0.0.2", 00:20:23.682 "trsvcid": "4420" 00:20:23.682 }, 00:20:23.682 "peer_address": { 00:20:23.682 "trtype": "TCP", 00:20:23.682 "adrfam": "IPv4", 00:20:23.682 "traddr": "10.0.0.1", 00:20:23.682 "trsvcid": "35614" 00:20:23.682 }, 00:20:23.682 "auth": { 00:20:23.682 "state": "completed", 00:20:23.682 "digest": "sha256", 00:20:23.682 "dhgroup": "null" 00:20:23.682 } 00:20:23.682 } 00:20:23.682 ]' 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.682 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.940 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
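That closes one connect_authenticate round for (sha256, null, key0): the host NQN is admitted to the subsystem with DHCHAP keys, a controller is attached through the host daemon's bdev layer, and the qpair's negotiated auth parameters are asserted before the controller is torn down again. The same round, condensed with this run's NQNs:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]    # DH-HMAC-CHAP finished
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]       # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]         # negotiated DH group

$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0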
DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:23.940 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:24.508 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.508 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:24.508 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.508 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.508 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.508 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.508 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.508 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.767 06:30:56 
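After the SPDK-side attach/detach, nvme-cli repeats the authentication from the kernel initiator, passing the DHHC-1 secrets inline, and the host entry is removed before the next combination is configured. A sketch with this run's values (both secrets abbreviated here; the full strings appear verbatim in the log above):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$HOSTNQN" --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:00:NzUxYzNk...' \
    --dhchap-ctrl-secret 'DHHC-1:03:NzFiZjY3...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"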
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.767 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.026 00:20:25.026 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.026 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.026 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.285 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.285 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.285 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.285 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.285 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.285 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.285 { 00:20:25.285 "cntlid": 3, 00:20:25.285 "qid": 0, 00:20:25.285 "state": "enabled", 00:20:25.286 "thread": "nvmf_tgt_poll_group_000", 00:20:25.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:25.286 "listen_address": { 00:20:25.286 "trtype": "TCP", 00:20:25.286 "adrfam": "IPv4", 00:20:25.286 "traddr": "10.0.0.2", 00:20:25.286 "trsvcid": "4420" 00:20:25.286 }, 00:20:25.286 "peer_address": { 00:20:25.286 "trtype": "TCP", 00:20:25.286 "adrfam": "IPv4", 00:20:25.286 "traddr": "10.0.0.1", 00:20:25.286 "trsvcid": "35632" 00:20:25.286 }, 00:20:25.286 "auth": { 00:20:25.286 "state": "completed", 00:20:25.286 "digest": "sha256", 00:20:25.286 "dhgroup": "null" 00:20:25.286 } 00:20:25.286 } 00:20:25.286 ]' 00:20:25.286 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.286 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.286 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.286 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:25.286 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.286 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.286 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.286 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.545 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:25.545 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:26.112 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.112 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:26.112 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.112 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.112 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.112 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.112 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:26.112 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.371 06:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.371 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.630 00:20:26.630 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.630 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.630 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.889 { 00:20:26.889 "cntlid": 5, 00:20:26.889 "qid": 0, 00:20:26.889 "state": "enabled", 00:20:26.889 "thread": "nvmf_tgt_poll_group_000", 00:20:26.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:26.889 "listen_address": { 00:20:26.889 "trtype": "TCP", 00:20:26.889 "adrfam": "IPv4", 00:20:26.889 "traddr": "10.0.0.2", 00:20:26.889 "trsvcid": "4420" 00:20:26.889 }, 00:20:26.889 "peer_address": { 00:20:26.889 "trtype": "TCP", 00:20:26.889 "adrfam": "IPv4", 00:20:26.889 "traddr": "10.0.0.1", 00:20:26.889 "trsvcid": "35668" 00:20:26.889 }, 00:20:26.889 "auth": { 00:20:26.889 "state": "completed", 00:20:26.889 "digest": "sha256", 00:20:26.889 "dhgroup": "null" 00:20:26.889 } 00:20:26.889 } 00:20:26.889 ]' 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.889 06:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.889 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.149 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:27.149 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:27.716 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.716 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:27.716 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.716 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.716 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.716 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.716 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:27.716 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.974 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.231 00:20:28.231 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.231 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.231 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.231 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.231 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.231 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.231 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.231 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.231 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.231 { 00:20:28.231 "cntlid": 7, 00:20:28.231 "qid": 0, 00:20:28.231 "state": "enabled", 00:20:28.231 "thread": "nvmf_tgt_poll_group_000", 00:20:28.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:28.231 "listen_address": { 00:20:28.231 "trtype": "TCP", 00:20:28.231 "adrfam": "IPv4", 00:20:28.231 "traddr": "10.0.0.2", 00:20:28.231 "trsvcid": "4420" 00:20:28.231 }, 00:20:28.231 "peer_address": { 00:20:28.231 "trtype": "TCP", 00:20:28.231 "adrfam": "IPv4", 00:20:28.231 "traddr": "10.0.0.1", 00:20:28.231 "trsvcid": "51318" 00:20:28.231 }, 00:20:28.231 "auth": { 00:20:28.231 "state": "completed", 00:20:28.231 "digest": "sha256", 00:20:28.231 "dhgroup": "null" 00:20:28.231 } 00:20:28.231 } 00:20:28.231 ]' 00:20:28.231 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.489 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.489 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.489 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:28.489 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.489 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.489 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.489 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.747 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:28.747 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.314 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.572 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.831 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.831 { 00:20:29.831 "cntlid": 9, 00:20:29.831 "qid": 0, 00:20:29.831 "state": "enabled", 00:20:29.831 "thread": "nvmf_tgt_poll_group_000", 00:20:29.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:29.831 "listen_address": { 00:20:29.831 "trtype": "TCP", 00:20:29.831 "adrfam": "IPv4", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "trsvcid": "4420" 00:20:29.831 }, 00:20:29.831 "peer_address": { 00:20:29.831 "trtype": "TCP", 00:20:29.831 "adrfam": "IPv4", 00:20:29.831 "traddr": "10.0.0.1", 00:20:29.831 "trsvcid": "51346" 00:20:29.831 }, 00:20:29.831 "auth": { 00:20:29.831 "state": "completed", 00:20:29.831 "digest": "sha256", 00:20:29.831 "dhgroup": "ffdhe2048" 00:20:29.831 } 00:20:29.831 } 00:20:29.831 ]' 00:20:29.831 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.090 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.090 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.090 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:30.090 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.090 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.090 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.090 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.349 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:30.349 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:30.917 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.917 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.917 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.917 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.917 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.917 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.917 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.917 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.176 06:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.176 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.176 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.435 { 00:20:31.435 "cntlid": 11, 00:20:31.435 "qid": 0, 00:20:31.435 "state": "enabled", 00:20:31.435 "thread": "nvmf_tgt_poll_group_000", 00:20:31.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:31.435 "listen_address": { 00:20:31.435 "trtype": "TCP", 00:20:31.435 "adrfam": "IPv4", 00:20:31.435 "traddr": "10.0.0.2", 00:20:31.435 "trsvcid": "4420" 00:20:31.435 }, 00:20:31.435 "peer_address": { 00:20:31.435 "trtype": "TCP", 00:20:31.435 "adrfam": "IPv4", 00:20:31.435 "traddr": "10.0.0.1", 00:20:31.435 "trsvcid": "51364" 00:20:31.435 }, 00:20:31.435 "auth": { 00:20:31.435 "state": "completed", 00:20:31.435 "digest": "sha256", 00:20:31.435 "dhgroup": "ffdhe2048" 00:20:31.435 } 00:20:31.435 } 00:20:31.435 ]' 00:20:31.435 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.693 06:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.693 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.693 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.693 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.693 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.693 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.693 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.952 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:31.952 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:32.519 06:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.519 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.520 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.520 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.520 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.520 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.778 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.036 { 00:20:33.036 "cntlid": 13, 00:20:33.036 "qid": 0, 00:20:33.036 "state": "enabled", 00:20:33.036 "thread": "nvmf_tgt_poll_group_000", 00:20:33.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:33.036 "listen_address": { 00:20:33.036 "trtype": "TCP", 00:20:33.036 "adrfam": "IPv4", 00:20:33.036 "traddr": "10.0.0.2", 00:20:33.036 "trsvcid": "4420" 00:20:33.036 }, 00:20:33.036 "peer_address": { 00:20:33.036 "trtype": "TCP", 00:20:33.036 "adrfam": "IPv4", 00:20:33.036 "traddr": "10.0.0.1", 00:20:33.036 "trsvcid": "51394" 00:20:33.036 }, 00:20:33.036 "auth": { 00:20:33.036 "state": "completed", 00:20:33.036 "digest": 
"sha256", 00:20:33.036 "dhgroup": "ffdhe2048" 00:20:33.036 } 00:20:33.036 } 00:20:33.036 ]' 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.036 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.295 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.295 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.295 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.295 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.295 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.554 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:33.554 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.121 06:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.121 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.379 00:20:34.379 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.379 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.379 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.638 { 00:20:34.638 "cntlid": 15, 00:20:34.638 "qid": 0, 00:20:34.638 "state": "enabled", 00:20:34.638 "thread": "nvmf_tgt_poll_group_000", 00:20:34.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:34.638 "listen_address": { 00:20:34.638 "trtype": "TCP", 00:20:34.638 "adrfam": "IPv4", 00:20:34.638 "traddr": "10.0.0.2", 00:20:34.638 "trsvcid": "4420" 00:20:34.638 }, 00:20:34.638 "peer_address": { 00:20:34.638 "trtype": "TCP", 00:20:34.638 "adrfam": "IPv4", 00:20:34.638 "traddr": "10.0.0.1", 00:20:34.638 
"trsvcid": "51412" 00:20:34.638 }, 00:20:34.638 "auth": { 00:20:34.638 "state": "completed", 00:20:34.638 "digest": "sha256", 00:20:34.638 "dhgroup": "ffdhe2048" 00:20:34.638 } 00:20:34.638 } 00:20:34.638 ]' 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.638 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.896 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.896 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.896 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.896 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.896 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.896 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:34.896 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.464 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:35.723 06:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.723 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.981 00:20:35.981 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.981 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.981 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.239 { 00:20:36.239 "cntlid": 17, 00:20:36.239 "qid": 0, 00:20:36.239 "state": "enabled", 00:20:36.239 "thread": "nvmf_tgt_poll_group_000", 00:20:36.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:36.239 "listen_address": { 00:20:36.239 "trtype": "TCP", 00:20:36.239 "adrfam": "IPv4", 
00:20:36.239 "traddr": "10.0.0.2", 00:20:36.239 "trsvcid": "4420" 00:20:36.239 }, 00:20:36.239 "peer_address": { 00:20:36.239 "trtype": "TCP", 00:20:36.239 "adrfam": "IPv4", 00:20:36.239 "traddr": "10.0.0.1", 00:20:36.239 "trsvcid": "51434" 00:20:36.239 }, 00:20:36.239 "auth": { 00:20:36.239 "state": "completed", 00:20:36.239 "digest": "sha256", 00:20:36.239 "dhgroup": "ffdhe3072" 00:20:36.239 } 00:20:36.239 } 00:20:36.239 ]' 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.239 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.239 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.239 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.239 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.239 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.239 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.498 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:36.498 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:37.066 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.066 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:37.066 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.066 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.066 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.066 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.066 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.066 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.328 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:37.328 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.328 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.328 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:37.328 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.328 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.328 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.329 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.329 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.329 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.329 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.329 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.329 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.589 00:20:37.589 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.589 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.589 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.849 { 
00:20:37.849 "cntlid": 19, 00:20:37.849 "qid": 0, 00:20:37.849 "state": "enabled", 00:20:37.849 "thread": "nvmf_tgt_poll_group_000", 00:20:37.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:37.849 "listen_address": { 00:20:37.849 "trtype": "TCP", 00:20:37.849 "adrfam": "IPv4", 00:20:37.849 "traddr": "10.0.0.2", 00:20:37.849 "trsvcid": "4420" 00:20:37.849 }, 00:20:37.849 "peer_address": { 00:20:37.849 "trtype": "TCP", 00:20:37.849 "adrfam": "IPv4", 00:20:37.849 "traddr": "10.0.0.1", 00:20:37.849 "trsvcid": "51470" 00:20:37.849 }, 00:20:37.849 "auth": { 00:20:37.849 "state": "completed", 00:20:37.849 "digest": "sha256", 00:20:37.849 "dhgroup": "ffdhe3072" 00:20:37.849 } 00:20:37.849 } 00:20:37.849 ]' 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.849 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.108 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:38.108 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:38.675 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.675 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:38.675 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.675 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.675 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.675 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.675 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.675 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.934 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.192 00:20:39.192 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.192 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.192 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.450 06:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.450 { 00:20:39.450 "cntlid": 21, 00:20:39.450 "qid": 0, 00:20:39.450 "state": "enabled", 00:20:39.450 "thread": "nvmf_tgt_poll_group_000", 00:20:39.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:39.450 "listen_address": { 00:20:39.450 "trtype": "TCP", 00:20:39.450 "adrfam": "IPv4", 00:20:39.450 "traddr": "10.0.0.2", 00:20:39.450 "trsvcid": "4420" 00:20:39.450 }, 00:20:39.450 "peer_address": { 00:20:39.450 "trtype": "TCP", 00:20:39.450 "adrfam": "IPv4", 00:20:39.450 "traddr": "10.0.0.1", 00:20:39.450 "trsvcid": "46642" 00:20:39.450 }, 00:20:39.450 "auth": { 00:20:39.450 "state": "completed", 00:20:39.450 "digest": "sha256", 00:20:39.450 "dhgroup": "ffdhe3072" 00:20:39.450 } 00:20:39.450 } 00:20:39.450 ]' 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.450 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.709 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:39.709 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:40.276 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.276 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:40.276 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.276 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.276 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:40.276 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.276 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.276 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.535 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.794 00:20:40.794 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.794 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.794 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.051 06:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.051 { 00:20:41.051 "cntlid": 23, 00:20:41.051 "qid": 0, 00:20:41.051 "state": "enabled", 00:20:41.051 "thread": "nvmf_tgt_poll_group_000", 00:20:41.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:41.051 "listen_address": { 00:20:41.051 "trtype": "TCP", 00:20:41.051 "adrfam": "IPv4", 00:20:41.051 "traddr": "10.0.0.2", 00:20:41.051 "trsvcid": "4420" 00:20:41.051 }, 00:20:41.051 "peer_address": { 00:20:41.051 "trtype": "TCP", 00:20:41.051 "adrfam": "IPv4", 00:20:41.051 "traddr": "10.0.0.1", 00:20:41.051 "trsvcid": "46682" 00:20:41.051 }, 00:20:41.051 "auth": { 00:20:41.051 "state": "completed", 00:20:41.051 "digest": "sha256", 00:20:41.051 "dhgroup": "ffdhe3072" 00:20:41.051 } 00:20:41.051 } 00:20:41.051 ]' 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.051 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.309 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:41.309 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:41.876 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.134 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:42.134 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.134 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:42.134 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:42.134 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:42.134 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.135 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.135 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.135 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.135 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.135 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.135 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.135 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.393 00:20:42.393 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.393 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.393 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.393 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.393 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.393 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.393 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.653 { 00:20:42.653 "cntlid": 25, 00:20:42.653 "qid": 0, 00:20:42.653 "state": "enabled", 00:20:42.653 "thread": "nvmf_tgt_poll_group_000", 00:20:42.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:42.653 "listen_address": { 00:20:42.653 "trtype": "TCP", 00:20:42.653 "adrfam": "IPv4", 00:20:42.653 "traddr": "10.0.0.2", 00:20:42.653 "trsvcid": "4420" 00:20:42.653 }, 00:20:42.653 "peer_address": { 00:20:42.653 "trtype": "TCP", 00:20:42.653 "adrfam": "IPv4", 00:20:42.653 "traddr": "10.0.0.1", 00:20:42.653 "trsvcid": "46718" 00:20:42.653 }, 00:20:42.653 "auth": { 00:20:42.653 "state": "completed", 00:20:42.653 "digest": "sha256", 00:20:42.653 "dhgroup": "ffdhe4096" 00:20:42.653 } 00:20:42.653 } 00:20:42.653 ]' 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.653 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.911 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:42.911 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:43.477 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.477 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:43.477 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.477 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.477 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.477 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.477 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.477 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.735 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:43.735 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.735 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.735 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:43.735 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.735 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.735 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.736 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.736 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.736 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.736 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.736 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.736 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.994 00:20:43.994 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.994 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.994 06:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.994 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.994 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.995 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.995 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.253 { 00:20:44.253 "cntlid": 27, 00:20:44.253 "qid": 0, 00:20:44.253 "state": "enabled", 00:20:44.253 "thread": "nvmf_tgt_poll_group_000", 00:20:44.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:44.253 "listen_address": { 00:20:44.253 "trtype": "TCP", 00:20:44.253 "adrfam": "IPv4", 00:20:44.253 "traddr": "10.0.0.2", 00:20:44.253 "trsvcid": "4420" 00:20:44.253 }, 00:20:44.253 "peer_address": { 00:20:44.253 "trtype": "TCP", 00:20:44.253 "adrfam": "IPv4", 00:20:44.253 "traddr": "10.0.0.1", 00:20:44.253 "trsvcid": "46730" 00:20:44.253 }, 00:20:44.253 "auth": { 00:20:44.253 "state": "completed", 00:20:44.253 "digest": "sha256", 00:20:44.253 "dhgroup": "ffdhe4096" 00:20:44.253 } 00:20:44.253 } 00:20:44.253 ]' 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.253 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.511 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:44.511 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:45.079 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.079 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.079 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:45.079 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.079 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.079 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.079 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.079 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.079 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.338 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.596 00:20:45.596 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.596 06:31:17 
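The qpairs JSON printed after every attach is probed the same way each time: three jq assertions confirm that authentication actually completed with the configured digest and DH group, rather than the connect merely succeeding. Gathered into one sketch (values shown are this ffdhe4096/sha256 pass's):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]  # negotiated HMAC hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # negotiated FFDHE group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished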
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.596 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.856 { 00:20:45.856 "cntlid": 29, 00:20:45.856 "qid": 0, 00:20:45.856 "state": "enabled", 00:20:45.856 "thread": "nvmf_tgt_poll_group_000", 00:20:45.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:45.856 "listen_address": { 00:20:45.856 "trtype": "TCP", 00:20:45.856 "adrfam": "IPv4", 00:20:45.856 "traddr": "10.0.0.2", 00:20:45.856 "trsvcid": "4420" 00:20:45.856 }, 00:20:45.856 "peer_address": { 00:20:45.856 "trtype": "TCP", 00:20:45.856 "adrfam": "IPv4", 00:20:45.856 "traddr": "10.0.0.1", 00:20:45.856 "trsvcid": "46752" 00:20:45.856 }, 00:20:45.856 "auth": { 00:20:45.856 "state": "completed", 00:20:45.856 "digest": "sha256", 00:20:45.856 "dhgroup": "ffdhe4096" 00:20:45.856 } 00:20:45.856 } 00:20:45.856 ]' 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.856 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.115 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:46.115 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret 
DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:46.683 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.683 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:46.683 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.683 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.683 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.683 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.683 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.683 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.942 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.201 00:20:47.201 06:31:18 
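The long --dhchap-secret/--dhchap-ctrl-secret strings use nvme-cli's DHHC-1 container format, DHHC-1:<t>:<base64 key material>:, where <t> indicates how the secret is transformed before use (00 = used as-is, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A secret of this shape can be generated with nvme-cli, roughly as below (a sketch; exact flag spellings vary across nvme-cli versions):

  # emit a DHHC-1:01:... secret: 32 bytes of key material, SHA-256 transform
  nvme gen-dhchap-key --key-length=32 --hmac=1 \
       --nqn nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562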
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.201 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.201 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.201 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.201 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.201 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.201 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.201 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.201 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.201 { 00:20:47.201 "cntlid": 31, 00:20:47.201 "qid": 0, 00:20:47.201 "state": "enabled", 00:20:47.201 "thread": "nvmf_tgt_poll_group_000", 00:20:47.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:47.201 "listen_address": { 00:20:47.201 "trtype": "TCP", 00:20:47.201 "adrfam": "IPv4", 00:20:47.201 "traddr": "10.0.0.2", 00:20:47.201 "trsvcid": "4420" 00:20:47.201 }, 00:20:47.201 "peer_address": { 00:20:47.201 "trtype": "TCP", 00:20:47.201 "adrfam": "IPv4", 00:20:47.201 "traddr": "10.0.0.1", 00:20:47.201 "trsvcid": "46784" 00:20:47.201 }, 00:20:47.201 "auth": { 00:20:47.201 "state": "completed", 00:20:47.201 "digest": "sha256", 00:20:47.201 "dhgroup": "ffdhe4096" 00:20:47.201 } 00:20:47.201 } 00:20:47.201 ]' 00:20:47.201 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.460 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.460 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.460 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.460 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.460 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.460 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.460 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.718 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:47.718 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.286 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.544 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.803 00:20:48.803 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.803 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.803 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.062 { 00:20:49.062 "cntlid": 33, 00:20:49.062 "qid": 0, 00:20:49.062 "state": "enabled", 00:20:49.062 "thread": "nvmf_tgt_poll_group_000", 00:20:49.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:49.062 "listen_address": { 00:20:49.062 "trtype": "TCP", 00:20:49.062 "adrfam": "IPv4", 00:20:49.062 "traddr": "10.0.0.2", 00:20:49.062 "trsvcid": "4420" 00:20:49.062 }, 00:20:49.062 "peer_address": { 00:20:49.062 "trtype": "TCP", 00:20:49.062 "adrfam": "IPv4", 00:20:49.062 "traddr": "10.0.0.1", 00:20:49.062 "trsvcid": "49746" 00:20:49.062 }, 00:20:49.062 "auth": { 00:20:49.062 "state": "completed", 00:20:49.062 "digest": "sha256", 00:20:49.062 "dhgroup": "ffdhe6144" 00:20:49.062 } 00:20:49.062 } 00:20:49.062 ]' 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.062 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.321 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret 
DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:49.321 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:49.889 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.889 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.889 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.889 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.889 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.889 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.889 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:49.889 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.148 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.406 00:20:50.406 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.406 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.406 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.666 { 00:20:50.666 "cntlid": 35, 00:20:50.666 "qid": 0, 00:20:50.666 "state": "enabled", 00:20:50.666 "thread": "nvmf_tgt_poll_group_000", 00:20:50.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:50.666 "listen_address": { 00:20:50.666 "trtype": "TCP", 00:20:50.666 "adrfam": "IPv4", 00:20:50.666 "traddr": "10.0.0.2", 00:20:50.666 "trsvcid": "4420" 00:20:50.666 }, 00:20:50.666 "peer_address": { 00:20:50.666 "trtype": "TCP", 00:20:50.666 "adrfam": "IPv4", 00:20:50.666 "traddr": "10.0.0.1", 00:20:50.666 "trsvcid": "49788" 00:20:50.666 }, 00:20:50.666 "auth": { 00:20:50.666 "state": "completed", 00:20:50.666 "digest": "sha256", 00:20:50.666 "dhgroup": "ffdhe6144" 00:20:50.666 } 00:20:50.666 } 00:20:50.666 ]' 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.666 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.924 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.924 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.924 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.924 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:50.925 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:51.491 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.491 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:51.491 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.491 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.491 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.491 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.491 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.491 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.749 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:51.749 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.749 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:51.749 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.750 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.009 00:20:52.267 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.267 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.267 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.267 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.267 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.267 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.267 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.267 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.267 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.267 { 00:20:52.267 "cntlid": 37, 00:20:52.267 "qid": 0, 00:20:52.267 "state": "enabled", 00:20:52.267 "thread": "nvmf_tgt_poll_group_000", 00:20:52.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:52.267 "listen_address": { 00:20:52.267 "trtype": "TCP", 00:20:52.267 "adrfam": "IPv4", 00:20:52.267 "traddr": "10.0.0.2", 00:20:52.267 "trsvcid": "4420" 00:20:52.267 }, 00:20:52.267 "peer_address": { 00:20:52.267 "trtype": "TCP", 00:20:52.267 "adrfam": "IPv4", 00:20:52.267 "traddr": "10.0.0.1", 00:20:52.267 "trsvcid": "49808" 00:20:52.267 }, 00:20:52.267 "auth": { 00:20:52.267 "state": "completed", 00:20:52.267 "digest": "sha256", 00:20:52.267 "dhgroup": "ffdhe6144" 00:20:52.267 } 00:20:52.267 } 00:20:52.267 ]' 00:20:52.267 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.526 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.526 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.526 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.526 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.526 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.526 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:52.526 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.784 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:52.784 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:53.351 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.351 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:53.351 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.351 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.351 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.351 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.351 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.351 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.612 06:31:25 
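Note that the key3 passes (such as the one above) supply only --dhchap-key key3, with no --dhchap-ctrlr-key: the script holds no controller key for key3, and the ${ckeys[...]:+...} expansion visible in the trace emits the extra arguments only when a controller secret exists, so these passes exercise unidirectional rather than bidirectional DH-HMAC-CHAP. The idiom in isolation (array contents here are illustrative, not the test's real secrets):

  ckeys=("c0secret" "c1secret" "c2secret" "")   # no controller secret for key3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})  # empty -> no args
  echo "extra args: ${ckey[*]:-(none, one-way auth)}"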
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.612 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.903 00:20:53.903 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.903 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.903 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.201 { 00:20:54.201 "cntlid": 39, 00:20:54.201 "qid": 0, 00:20:54.201 "state": "enabled", 00:20:54.201 "thread": "nvmf_tgt_poll_group_000", 00:20:54.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:54.201 "listen_address": { 00:20:54.201 "trtype": "TCP", 00:20:54.201 "adrfam": "IPv4", 00:20:54.201 "traddr": "10.0.0.2", 00:20:54.201 "trsvcid": "4420" 00:20:54.201 }, 00:20:54.201 "peer_address": { 00:20:54.201 "trtype": "TCP", 00:20:54.201 "adrfam": "IPv4", 00:20:54.201 "traddr": "10.0.0.1", 00:20:54.201 "trsvcid": "49848" 00:20:54.201 }, 00:20:54.201 "auth": { 00:20:54.201 "state": "completed", 00:20:54.201 "digest": "sha256", 00:20:54.201 "dhgroup": "ffdhe6144" 00:20:54.201 } 00:20:54.201 } 00:20:54.201 ]' 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.201 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.465 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:54.465 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:55.029 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.287 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.287 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.287 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.545 00:20:55.545 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.545 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.545 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.803 { 00:20:55.803 "cntlid": 41, 00:20:55.803 "qid": 0, 00:20:55.803 "state": "enabled", 00:20:55.803 "thread": "nvmf_tgt_poll_group_000", 00:20:55.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:55.803 "listen_address": { 00:20:55.803 "trtype": "TCP", 00:20:55.803 "adrfam": "IPv4", 00:20:55.803 "traddr": "10.0.0.2", 00:20:55.803 "trsvcid": "4420" 00:20:55.803 }, 00:20:55.803 "peer_address": { 00:20:55.803 "trtype": "TCP", 00:20:55.803 "adrfam": "IPv4", 00:20:55.803 "traddr": "10.0.0.1", 00:20:55.803 "trsvcid": "49884" 00:20:55.803 }, 00:20:55.803 "auth": { 00:20:55.803 "state": "completed", 00:20:55.803 "digest": "sha256", 00:20:55.803 "dhgroup": "ffdhe8192" 00:20:55.803 } 00:20:55.803 } 00:20:55.803 ]' 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.803 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.063 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.063 06:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.063 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.063 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.063 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.063 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:56.063 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:20:56.630 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.888 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.456 00:20:57.456 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.456 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.456 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.715 { 00:20:57.715 "cntlid": 43, 00:20:57.715 "qid": 0, 00:20:57.715 "state": "enabled", 00:20:57.715 "thread": "nvmf_tgt_poll_group_000", 00:20:57.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:57.715 "listen_address": { 00:20:57.715 "trtype": "TCP", 00:20:57.715 "adrfam": "IPv4", 00:20:57.715 "traddr": "10.0.0.2", 00:20:57.715 "trsvcid": "4420" 00:20:57.715 }, 00:20:57.715 "peer_address": { 00:20:57.715 "trtype": "TCP", 00:20:57.715 "adrfam": "IPv4", 00:20:57.715 "traddr": "10.0.0.1", 00:20:57.715 "trsvcid": "49920" 00:20:57.715 }, 00:20:57.715 "auth": { 00:20:57.715 "state": "completed", 00:20:57.715 "digest": "sha256", 00:20:57.715 "dhgroup": "ffdhe8192" 00:20:57.715 } 00:20:57.715 } 00:20:57.715 ]' 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.715 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.974 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:57.974 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:20:58.540 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.540 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:58.540 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.540 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.540 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.540 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.540 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.540 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:58.799 06:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.799 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.366 00:20:59.366 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.366 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.366 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.366 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.366 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.366 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.366 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.625 { 00:20:59.625 "cntlid": 45, 00:20:59.625 "qid": 0, 00:20:59.625 "state": "enabled", 00:20:59.625 "thread": "nvmf_tgt_poll_group_000", 00:20:59.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:59.625 "listen_address": { 00:20:59.625 "trtype": "TCP", 00:20:59.625 "adrfam": "IPv4", 00:20:59.625 "traddr": "10.0.0.2", 00:20:59.625 "trsvcid": "4420" 00:20:59.625 }, 00:20:59.625 "peer_address": { 00:20:59.625 "trtype": "TCP", 00:20:59.625 "adrfam": "IPv4", 00:20:59.625 "traddr": "10.0.0.1", 00:20:59.625 "trsvcid": "59534" 00:20:59.625 }, 00:20:59.625 "auth": { 00:20:59.625 "state": "completed", 00:20:59.625 "digest": "sha256", 00:20:59.625 "dhgroup": "ffdhe8192" 00:20:59.625 } 00:20:59.625 } 00:20:59.625 ]' 00:20:59.625 
06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.625 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.883 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:20:59.883 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:00.449 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.449 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:00.449 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.449 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.449 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.449 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.449 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.450 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:00.708 06:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.708 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.967 00:21:01.225 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.225 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.225 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.225 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.226 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.226 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.226 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.226 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.226 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.226 { 00:21:01.226 "cntlid": 47, 00:21:01.226 "qid": 0, 00:21:01.226 "state": "enabled", 00:21:01.226 "thread": "nvmf_tgt_poll_group_000", 00:21:01.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:01.226 "listen_address": { 00:21:01.226 "trtype": "TCP", 00:21:01.226 "adrfam": "IPv4", 00:21:01.226 "traddr": "10.0.0.2", 00:21:01.226 "trsvcid": "4420" 00:21:01.226 }, 00:21:01.226 "peer_address": { 00:21:01.226 "trtype": "TCP", 00:21:01.226 "adrfam": "IPv4", 00:21:01.226 "traddr": "10.0.0.1", 00:21:01.226 "trsvcid": "59564" 00:21:01.226 }, 00:21:01.226 "auth": { 00:21:01.226 "state": "completed", 00:21:01.226 
"digest": "sha256", 00:21:01.226 "dhgroup": "ffdhe8192" 00:21:01.226 } 00:21:01.226 } 00:21:01.226 ]' 00:21:01.226 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.485 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.485 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.485 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.485 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.485 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.485 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.485 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.743 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:01.743 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:02.309 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.309 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:02.309 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.310 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.310 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.310 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:02.310 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.310 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.310 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.310 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:02.310 06:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.310 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.568 00:21:02.569 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.569 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.569 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.827 { 00:21:02.827 "cntlid": 49, 00:21:02.827 "qid": 0, 00:21:02.827 "state": "enabled", 00:21:02.827 "thread": "nvmf_tgt_poll_group_000", 00:21:02.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:02.827 "listen_address": { 00:21:02.827 "trtype": "TCP", 00:21:02.827 "adrfam": "IPv4", 
00:21:02.827 "traddr": "10.0.0.2", 00:21:02.827 "trsvcid": "4420" 00:21:02.827 }, 00:21:02.827 "peer_address": { 00:21:02.827 "trtype": "TCP", 00:21:02.827 "adrfam": "IPv4", 00:21:02.827 "traddr": "10.0.0.1", 00:21:02.827 "trsvcid": "59586" 00:21:02.827 }, 00:21:02.827 "auth": { 00:21:02.827 "state": "completed", 00:21:02.827 "digest": "sha384", 00:21:02.827 "dhgroup": "null" 00:21:02.827 } 00:21:02.827 } 00:21:02.827 ]' 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:02.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.086 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.086 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.086 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.086 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:03.086 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:03.653 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.653 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:03.653 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.653 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.653 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.653 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.653 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.653 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.912 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.171 00:21:04.171 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.171 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.171 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.429 { 00:21:04.429 "cntlid": 51, 00:21:04.429 "qid": 0, 00:21:04.429 "state": "enabled", 
00:21:04.429 "thread": "nvmf_tgt_poll_group_000", 00:21:04.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:04.429 "listen_address": { 00:21:04.429 "trtype": "TCP", 00:21:04.429 "adrfam": "IPv4", 00:21:04.429 "traddr": "10.0.0.2", 00:21:04.429 "trsvcid": "4420" 00:21:04.429 }, 00:21:04.429 "peer_address": { 00:21:04.429 "trtype": "TCP", 00:21:04.429 "adrfam": "IPv4", 00:21:04.429 "traddr": "10.0.0.1", 00:21:04.429 "trsvcid": "59614" 00:21:04.429 }, 00:21:04.429 "auth": { 00:21:04.429 "state": "completed", 00:21:04.429 "digest": "sha384", 00:21:04.429 "dhgroup": "null" 00:21:04.429 } 00:21:04.429 } 00:21:04.429 ]' 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.429 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.688 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:04.688 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:05.255 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.255 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:05.255 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.255 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.255 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.255 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.255 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:05.255 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.515 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.774 00:21:05.774 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.774 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.774 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.032 06:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.032 { 00:21:06.032 "cntlid": 53, 00:21:06.032 "qid": 0, 00:21:06.032 "state": "enabled", 00:21:06.032 "thread": "nvmf_tgt_poll_group_000", 00:21:06.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:06.032 "listen_address": { 00:21:06.032 "trtype": "TCP", 00:21:06.032 "adrfam": "IPv4", 00:21:06.032 "traddr": "10.0.0.2", 00:21:06.032 "trsvcid": "4420" 00:21:06.032 }, 00:21:06.032 "peer_address": { 00:21:06.032 "trtype": "TCP", 00:21:06.032 "adrfam": "IPv4", 00:21:06.032 "traddr": "10.0.0.1", 00:21:06.032 "trsvcid": "59632" 00:21:06.032 }, 00:21:06.032 "auth": { 00:21:06.032 "state": "completed", 00:21:06.032 "digest": "sha384", 00:21:06.032 "dhgroup": "null" 00:21:06.032 } 00:21:06.032 } 00:21:06.032 ]' 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.032 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.291 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:06.291 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:06.858 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.858 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:06.858 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.858 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.858 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.858 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:06.858 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:06.858 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:07.117 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:07.117 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.117 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.117 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:07.117 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.117 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.118 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:07.118 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.118 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.118 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.118 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.118 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.118 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.376 00:21:07.376 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.376 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.376 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.376 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.376 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.376 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.376 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.376 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.376 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.376 { 00:21:07.376 "cntlid": 55, 00:21:07.376 "qid": 0, 00:21:07.376 "state": "enabled", 00:21:07.376 "thread": "nvmf_tgt_poll_group_000", 00:21:07.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:07.376 "listen_address": { 00:21:07.376 "trtype": "TCP", 00:21:07.376 "adrfam": "IPv4", 00:21:07.376 "traddr": "10.0.0.2", 00:21:07.376 "trsvcid": "4420" 00:21:07.376 }, 00:21:07.376 "peer_address": { 00:21:07.376 "trtype": "TCP", 00:21:07.376 "adrfam": "IPv4", 00:21:07.376 "traddr": "10.0.0.1", 00:21:07.376 "trsvcid": "59666" 00:21:07.376 }, 00:21:07.376 "auth": { 00:21:07.376 "state": "completed", 00:21:07.376 "digest": "sha384", 00:21:07.376 "dhgroup": "null" 00:21:07.376 } 00:21:07.376 } 00:21:07.376 ]' 00:21:07.376 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.634 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.634 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.634 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:07.634 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.634 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.634 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.634 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.893 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:07.893 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:08.459 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.459 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:08.459 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.459 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.459 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.459 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.459 06:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.459 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.460 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.460 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:08.460 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.460 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.460 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:08.460 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.460 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.719 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.719 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.719 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.719 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.719 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.719 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.719 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.719 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.978 { 00:21:08.978 "cntlid": 57, 00:21:08.978 "qid": 0, 00:21:08.978 "state": "enabled", 00:21:08.978 "thread": "nvmf_tgt_poll_group_000", 00:21:08.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:08.978 "listen_address": { 00:21:08.978 "trtype": "TCP", 00:21:08.978 "adrfam": "IPv4", 00:21:08.978 "traddr": "10.0.0.2", 00:21:08.978 "trsvcid": "4420" 00:21:08.978 }, 00:21:08.978 "peer_address": { 00:21:08.978 "trtype": "TCP", 00:21:08.978 "adrfam": "IPv4", 00:21:08.978 "traddr": "10.0.0.1", 00:21:08.978 "trsvcid": "43228" 00:21:08.978 }, 00:21:08.978 "auth": { 00:21:08.978 "state": "completed", 00:21:08.978 "digest": "sha384", 00:21:08.978 "dhgroup": "ffdhe2048" 00:21:08.978 } 00:21:08.978 } 00:21:08.978 ]' 00:21:08.978 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.236 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.236 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.236 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.236 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.236 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.236 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.236 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.495 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:09.495 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:10.080 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.080 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:10.080 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.080 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.080 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.080 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.080 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.081 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.345 00:21:10.345 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.345 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.345 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.604 { 00:21:10.604 "cntlid": 59, 00:21:10.604 "qid": 0, 00:21:10.604 "state": "enabled", 00:21:10.604 "thread": "nvmf_tgt_poll_group_000", 00:21:10.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:10.604 "listen_address": { 00:21:10.604 "trtype": "TCP", 00:21:10.604 "adrfam": "IPv4", 00:21:10.604 "traddr": "10.0.0.2", 00:21:10.604 "trsvcid": "4420" 00:21:10.604 }, 00:21:10.604 "peer_address": { 00:21:10.604 "trtype": "TCP", 00:21:10.604 "adrfam": "IPv4", 00:21:10.604 "traddr": "10.0.0.1", 00:21:10.604 "trsvcid": "43250" 00:21:10.604 }, 00:21:10.604 "auth": { 00:21:10.604 "state": "completed", 00:21:10.604 "digest": "sha384", 00:21:10.604 "dhgroup": "ffdhe2048" 00:21:10.604 } 00:21:10.604 } 00:21:10.604 ]' 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.604 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.863 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.863 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.863 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.863 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:10.863 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:11.490 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.490 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:11.490 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.490 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.490 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.490 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.490 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.490 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.749 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.007 00:21:12.007 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.007 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.007 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.266 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.266 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.266 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.266 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.266 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.266 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.266 { 00:21:12.266 "cntlid": 61, 00:21:12.266 "qid": 0, 00:21:12.266 "state": "enabled", 00:21:12.266 "thread": "nvmf_tgt_poll_group_000", 00:21:12.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:12.266 "listen_address": { 00:21:12.266 "trtype": "TCP", 00:21:12.266 "adrfam": "IPv4", 00:21:12.266 "traddr": "10.0.0.2", 00:21:12.266 "trsvcid": "4420" 00:21:12.266 }, 00:21:12.266 "peer_address": { 00:21:12.266 "trtype": "TCP", 00:21:12.266 "adrfam": "IPv4", 00:21:12.266 "traddr": "10.0.0.1", 00:21:12.266 "trsvcid": "43272" 00:21:12.266 }, 00:21:12.266 "auth": { 00:21:12.266 "state": "completed", 00:21:12.266 "digest": "sha384", 00:21:12.267 "dhgroup": "ffdhe2048" 00:21:12.267 } 00:21:12.267 } 00:21:12.267 ]' 00:21:12.267 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.267 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.267 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.267 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.267 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.267 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.267 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.267 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.523 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:12.523 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:13.090 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.090 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:13.090 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.090 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.090 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.090 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.090 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.090 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.349 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.608 00:21:13.608 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.608 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.608 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.867 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.867 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.867 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.867 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.867 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.867 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.867 { 00:21:13.867 "cntlid": 63, 00:21:13.867 "qid": 0, 00:21:13.867 "state": "enabled", 00:21:13.867 "thread": "nvmf_tgt_poll_group_000", 00:21:13.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:13.867 "listen_address": { 00:21:13.867 "trtype": "TCP", 00:21:13.867 "adrfam": "IPv4", 00:21:13.867 "traddr": "10.0.0.2", 00:21:13.867 "trsvcid": "4420" 00:21:13.867 }, 00:21:13.867 "peer_address": { 00:21:13.867 "trtype": "TCP", 00:21:13.867 "adrfam": "IPv4", 00:21:13.868 "traddr": "10.0.0.1", 00:21:13.868 "trsvcid": "43282" 00:21:13.868 }, 00:21:13.868 "auth": { 00:21:13.868 "state": "completed", 00:21:13.868 "digest": "sha384", 00:21:13.868 "dhgroup": "ffdhe2048" 00:21:13.868 } 00:21:13.868 } 00:21:13.868 ]' 00:21:13.868 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.868 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.868 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.868 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.868 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.868 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.868 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.868 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.126 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:14.126 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:14.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.693 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.953 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.211 
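Each dhgroup/key pass in this trace repeats the same three RPCs: pin the host's allowed digest and dhgroup, register the host NQN with the keys under test on the target, then attach a controller from the host side, which is where DH-HMAC-CHAP actually runs. A condensed sketch of that sequence, assuming the rpc.py path, socket, addresses, and NQNs shown verbatim in the log ($HOSTNQN stands in for the uuid-based hostnqn above):

# host side: restrict negotiation to the digest/dhgroup combination under test
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# target side: allow this host NQN with the keys for this iteration
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller; the fabric connect performs the authentication
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
  -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
  -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0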
00:21:15.211 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.211 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.211 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.470 { 00:21:15.470 "cntlid": 65, 00:21:15.470 "qid": 0, 00:21:15.470 "state": "enabled", 00:21:15.470 "thread": "nvmf_tgt_poll_group_000", 00:21:15.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:15.470 "listen_address": { 00:21:15.470 "trtype": "TCP", 00:21:15.470 "adrfam": "IPv4", 00:21:15.470 "traddr": "10.0.0.2", 00:21:15.470 "trsvcid": "4420" 00:21:15.470 }, 00:21:15.470 "peer_address": { 00:21:15.470 "trtype": "TCP", 00:21:15.470 "adrfam": "IPv4", 00:21:15.470 "traddr": "10.0.0.1", 00:21:15.470 "trsvcid": "43310" 00:21:15.470 }, 00:21:15.470 "auth": { 00:21:15.470 "state": "completed", 00:21:15.470 "digest": "sha384", 00:21:15.470 "dhgroup": "ffdhe3072" 00:21:15.470 } 00:21:15.470 } 00:21:15.470 ]' 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.470 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.471 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.471 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.471 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.471 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.731 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:15.731 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:16.299 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.299 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:16.299 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.299 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.299 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.299 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.299 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.299 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.558 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:16.558 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.559 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.818 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.818 { 00:21:16.818 "cntlid": 67, 00:21:16.818 "qid": 0, 00:21:16.818 "state": "enabled", 00:21:16.818 "thread": "nvmf_tgt_poll_group_000", 00:21:16.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:16.818 "listen_address": { 00:21:16.818 "trtype": "TCP", 00:21:16.818 "adrfam": "IPv4", 00:21:16.818 "traddr": "10.0.0.2", 00:21:16.818 "trsvcid": "4420" 00:21:16.818 }, 00:21:16.818 "peer_address": { 00:21:16.818 "trtype": "TCP", 00:21:16.818 "adrfam": "IPv4", 00:21:16.818 "traddr": "10.0.0.1", 00:21:16.818 "trsvcid": "43342" 00:21:16.818 }, 00:21:16.818 "auth": { 00:21:16.818 "state": "completed", 00:21:16.818 "digest": "sha384", 00:21:16.818 "dhgroup": "ffdhe3072" 00:21:16.818 } 00:21:16.818 } 00:21:16.818 ]' 00:21:16.818 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.077 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.077 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.077 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.077 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.077 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.077 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.077 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.336 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret 
DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:17.336 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.903 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.162 00:21:18.420 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.420 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.420 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.421 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.421 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.421 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.421 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.421 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.421 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.421 { 00:21:18.421 "cntlid": 69, 00:21:18.421 "qid": 0, 00:21:18.421 "state": "enabled", 00:21:18.421 "thread": "nvmf_tgt_poll_group_000", 00:21:18.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:18.421 "listen_address": { 00:21:18.421 "trtype": "TCP", 00:21:18.421 "adrfam": "IPv4", 00:21:18.421 "traddr": "10.0.0.2", 00:21:18.421 "trsvcid": "4420" 00:21:18.421 }, 00:21:18.421 "peer_address": { 00:21:18.421 "trtype": "TCP", 00:21:18.421 "adrfam": "IPv4", 00:21:18.421 "traddr": "10.0.0.1", 00:21:18.421 "trsvcid": "39124" 00:21:18.421 }, 00:21:18.421 "auth": { 00:21:18.421 "state": "completed", 00:21:18.421 "digest": "sha384", 00:21:18.421 "dhgroup": "ffdhe3072" 00:21:18.421 } 00:21:18.421 } 00:21:18.421 ]' 00:21:18.421 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.680 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.680 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.680 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.680 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.680 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.680 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.680 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:18.939 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:18.939 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.506 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
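The key3 iterations above pass --dhchap-key key3 with no --dhchap-ctrlr-key, because ckey3 is unset and target/auth.sh builds that option through the ${ckeys[$3]:+...} expansion visible at auth.sh@68 in the trace. A minimal sketch of that bash idiom, with hypothetical placeholder key material:

ckeys=("c0" "c1" "c2" "")   # ckey3 deliberately empty for the unidirectional case
keyid=3
# :+ substitutes the alternative only when the variable is set and non-empty,
# so for keyid=3 this assigns an empty array and the RPC gets no ctrlr-key flag
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo rpc.py nvmf_subsystem_add_host SUBNQN HOSTNQN --dhchap-key "key$keyid" "${ckey[@]}"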
00:21:19.765 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.765 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.765 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.024 { 00:21:20.024 "cntlid": 71, 00:21:20.024 "qid": 0, 00:21:20.024 "state": "enabled", 00:21:20.024 "thread": "nvmf_tgt_poll_group_000", 00:21:20.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:20.024 "listen_address": { 00:21:20.024 "trtype": "TCP", 00:21:20.024 "adrfam": "IPv4", 00:21:20.024 "traddr": "10.0.0.2", 00:21:20.024 "trsvcid": "4420" 00:21:20.024 }, 00:21:20.024 "peer_address": { 00:21:20.024 "trtype": "TCP", 00:21:20.024 "adrfam": "IPv4", 00:21:20.024 "traddr": "10.0.0.1", 00:21:20.024 "trsvcid": "39160" 00:21:20.024 }, 00:21:20.024 "auth": { 00:21:20.024 "state": "completed", 00:21:20.024 "digest": "sha384", 00:21:20.024 "dhgroup": "ffdhe3072" 00:21:20.024 } 00:21:20.024 } 00:21:20.024 ]' 00:21:20.024 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.283 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.283 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.283 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.283 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.283 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.283 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.283 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.542 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:20.542 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
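After every attach, the script cross-checks on the target that the admin qpair negotiated exactly what was configured, via nvmf_subsystem_get_qpairs and the jq filters seen above. A sketch of that verification for this ffdhe4096 pass, under the same assumptions (SUBNQN standing in for nqn.2024-03.io.spdk:cnode0):

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs SUBNQN)
digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")    # expect sha384
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")  # expect ffdhe4096 in this pass
state=$(jq -r '.[0].auth.state' <<< "$qpairs")      # expect completed
[[ $digest == sha384 && $dhgroup == ffdhe4096 && $state == completed ]] \
  || echo "auth parameters did not match" >&2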
00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.109 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.368 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.626 { 00:21:21.626 "cntlid": 73, 00:21:21.626 "qid": 0, 00:21:21.626 "state": "enabled", 00:21:21.626 "thread": "nvmf_tgt_poll_group_000", 00:21:21.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:21.626 "listen_address": { 00:21:21.626 "trtype": "TCP", 00:21:21.626 "adrfam": "IPv4", 00:21:21.626 "traddr": "10.0.0.2", 00:21:21.626 "trsvcid": "4420" 00:21:21.626 }, 00:21:21.626 "peer_address": { 00:21:21.626 "trtype": "TCP", 00:21:21.626 "adrfam": "IPv4", 00:21:21.626 "traddr": "10.0.0.1", 00:21:21.626 "trsvcid": "39182" 00:21:21.626 }, 00:21:21.626 "auth": { 00:21:21.626 "state": "completed", 00:21:21.626 "digest": "sha384", 00:21:21.626 "dhgroup": "ffdhe4096" 00:21:21.626 } 00:21:21.626 } 00:21:21.626 ]' 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.626 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.884 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.884 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.884 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.884 
06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.884 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.143 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:22.143 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:22.709 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.710 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.969 00:21:23.227 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.227 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.227 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.227 { 00:21:23.227 "cntlid": 75, 00:21:23.227 "qid": 0, 00:21:23.227 "state": "enabled", 00:21:23.227 "thread": "nvmf_tgt_poll_group_000", 00:21:23.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:23.227 "listen_address": { 00:21:23.227 "trtype": "TCP", 00:21:23.227 "adrfam": "IPv4", 00:21:23.227 "traddr": "10.0.0.2", 00:21:23.227 "trsvcid": "4420" 00:21:23.227 }, 00:21:23.227 "peer_address": { 00:21:23.227 "trtype": "TCP", 00:21:23.227 "adrfam": "IPv4", 00:21:23.227 "traddr": "10.0.0.1", 00:21:23.227 "trsvcid": "39206" 00:21:23.227 }, 00:21:23.227 "auth": { 00:21:23.227 "state": "completed", 00:21:23.227 "digest": "sha384", 00:21:23.227 "dhgroup": "ffdhe4096" 00:21:23.227 } 00:21:23.227 } 00:21:23.227 ]' 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.227 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.486 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:23.486 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.486 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.486 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.486 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.745 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:23.745 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:24.312 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.312 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:24.312 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.312 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.312 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.312 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.312 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.312 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.312 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:24.312 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.312 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.312 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:24.312 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.312 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.312 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.312 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.313 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.313 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.313 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.313 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.313 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.571 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.830 { 00:21:24.830 "cntlid": 77, 00:21:24.830 "qid": 0, 00:21:24.830 "state": "enabled", 00:21:24.830 "thread": "nvmf_tgt_poll_group_000", 00:21:24.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:24.830 "listen_address": { 00:21:24.830 "trtype": "TCP", 00:21:24.830 "adrfam": "IPv4", 00:21:24.830 "traddr": "10.0.0.2", 00:21:24.830 "trsvcid": "4420" 00:21:24.830 }, 00:21:24.830 "peer_address": { 00:21:24.830 "trtype": "TCP", 00:21:24.830 "adrfam": "IPv4", 00:21:24.830 "traddr": "10.0.0.1", 00:21:24.830 "trsvcid": "39226" 00:21:24.830 }, 00:21:24.830 "auth": { 00:21:24.830 "state": "completed", 00:21:24.830 "digest": "sha384", 00:21:24.830 "dhgroup": "ffdhe4096" 00:21:24.830 } 00:21:24.830 } 00:21:24.830 ]' 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.830 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.830 06:31:56 
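Interleaved with the host-application passes, each key is also probed through the kernel initiator. The nvme connect invocations above all follow one shape, sketched here with the secrets reduced to variables; the flag reading is mine, not from the log (-i 1 caps the I/O queue count, -l 0 zeroes ctrl-loss-tmo so a rejected authentication fails fast):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  hostid=00ad29c2-ccbd-e911-906e-0017a4403562

  # Connect with in-band DH-HMAC-CHAP secrets, then drop the connection.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
      -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0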
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.089 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:25.089 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.089 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.089 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.089 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.347 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:25.347 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.915 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.173 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.432 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.432 { 00:21:26.432 "cntlid": 79, 00:21:26.432 "qid": 0, 00:21:26.432 "state": "enabled", 00:21:26.432 "thread": "nvmf_tgt_poll_group_000", 00:21:26.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:26.432 "listen_address": { 00:21:26.432 "trtype": "TCP", 00:21:26.432 "adrfam": "IPv4", 00:21:26.432 "traddr": "10.0.0.2", 00:21:26.432 "trsvcid": "4420" 00:21:26.432 }, 00:21:26.433 "peer_address": { 00:21:26.433 "trtype": "TCP", 00:21:26.433 "adrfam": "IPv4", 00:21:26.433 "traddr": "10.0.0.1", 00:21:26.433 "trsvcid": "39232" 00:21:26.433 }, 00:21:26.433 "auth": { 00:21:26.433 "state": "completed", 00:21:26.433 "digest": "sha384", 00:21:26.433 "dhgroup": "ffdhe4096" 00:21:26.433 } 00:21:26.433 } 00:21:26.433 ]' 00:21:26.433 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.691 06:31:58 
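key3 is the one slot without a paired controller key, so the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion above collapses to an empty array and nvmf_subsystem_add_host runs with --dhchap-key key3 alone, exercising unidirectional authentication. The same bash idiom spelled out, assuming the keys/ckeys arrays defined earlier in auth.sh:

  # Append the controller-key flag only when a ckey exists for this index;
  # for key3 (no ckey3) the array stays empty and one-way auth is tested.
  ckey=()
  [[ -n ${ckeys[$keyid]-} ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"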
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.691 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.691 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.691 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.691 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.691 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.691 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.949 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:26.949 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.515 06:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.515 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.081 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.081 { 00:21:28.081 "cntlid": 81, 00:21:28.081 "qid": 0, 00:21:28.081 "state": "enabled", 00:21:28.081 "thread": "nvmf_tgt_poll_group_000", 00:21:28.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:28.081 "listen_address": { 00:21:28.081 "trtype": "TCP", 00:21:28.081 "adrfam": "IPv4", 00:21:28.081 "traddr": "10.0.0.2", 00:21:28.081 "trsvcid": "4420" 00:21:28.081 }, 00:21:28.081 "peer_address": { 00:21:28.081 "trtype": "TCP", 00:21:28.081 "adrfam": "IPv4", 00:21:28.081 "traddr": "10.0.0.1", 00:21:28.081 "trsvcid": "39268" 00:21:28.081 }, 00:21:28.081 "auth": { 00:21:28.081 "state": "completed", 00:21:28.081 "digest": 
"sha384", 00:21:28.081 "dhgroup": "ffdhe6144" 00:21:28.081 } 00:21:28.081 } 00:21:28.081 ]' 00:21:28.081 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.338 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.338 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.338 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:28.338 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.338 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.338 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.338 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.595 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:28.596 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.161 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.419 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.419 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.419 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.419 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.676 00:21:29.676 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.676 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.676 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.933 { 00:21:29.933 "cntlid": 83, 00:21:29.933 "qid": 0, 00:21:29.933 "state": "enabled", 00:21:29.933 "thread": "nvmf_tgt_poll_group_000", 00:21:29.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:29.933 "listen_address": { 00:21:29.933 "trtype": "TCP", 00:21:29.933 "adrfam": "IPv4", 00:21:29.933 "traddr": "10.0.0.2", 00:21:29.933 
"trsvcid": "4420" 00:21:29.933 }, 00:21:29.933 "peer_address": { 00:21:29.933 "trtype": "TCP", 00:21:29.933 "adrfam": "IPv4", 00:21:29.933 "traddr": "10.0.0.1", 00:21:29.933 "trsvcid": "34738" 00:21:29.933 }, 00:21:29.933 "auth": { 00:21:29.933 "state": "completed", 00:21:29.933 "digest": "sha384", 00:21:29.933 "dhgroup": "ffdhe6144" 00:21:29.933 } 00:21:29.933 } 00:21:29.933 ]' 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.933 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.934 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.934 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.934 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.934 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.934 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.192 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:30.192 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:30.759 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.759 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:30.759 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.759 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.759 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.759 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.759 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.759 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:31.019 
06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.019 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.280 00:21:31.280 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.280 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.280 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.615 { 00:21:31.615 "cntlid": 85, 00:21:31.615 "qid": 0, 00:21:31.615 "state": "enabled", 00:21:31.615 "thread": "nvmf_tgt_poll_group_000", 00:21:31.615 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:31.615 "listen_address": { 00:21:31.615 "trtype": "TCP", 00:21:31.615 "adrfam": "IPv4", 00:21:31.615 "traddr": "10.0.0.2", 00:21:31.615 "trsvcid": "4420" 00:21:31.615 }, 00:21:31.615 "peer_address": { 00:21:31.615 "trtype": "TCP", 00:21:31.615 "adrfam": "IPv4", 00:21:31.615 "traddr": "10.0.0.1", 00:21:31.615 "trsvcid": "34754" 00:21:31.615 }, 00:21:31.615 "auth": { 00:21:31.615 "state": "completed", 00:21:31.615 "digest": "sha384", 00:21:31.615 "dhgroup": "ffdhe6144" 00:21:31.615 } 00:21:31.615 } 00:21:31.615 ]' 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.615 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.945 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:31.945 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.524 06:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.524 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.089 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.089 { 00:21:33.089 "cntlid": 87, 
00:21:33.089 "qid": 0, 00:21:33.089 "state": "enabled", 00:21:33.089 "thread": "nvmf_tgt_poll_group_000", 00:21:33.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:33.089 "listen_address": { 00:21:33.089 "trtype": "TCP", 00:21:33.089 "adrfam": "IPv4", 00:21:33.089 "traddr": "10.0.0.2", 00:21:33.089 "trsvcid": "4420" 00:21:33.089 }, 00:21:33.089 "peer_address": { 00:21:33.089 "trtype": "TCP", 00:21:33.089 "adrfam": "IPv4", 00:21:33.089 "traddr": "10.0.0.1", 00:21:33.089 "trsvcid": "34780" 00:21:33.089 }, 00:21:33.089 "auth": { 00:21:33.089 "state": "completed", 00:21:33.089 "digest": "sha384", 00:21:33.089 "dhgroup": "ffdhe6144" 00:21:33.089 } 00:21:33.089 } 00:21:33.089 ]' 00:21:33.089 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.348 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.348 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.348 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.348 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.348 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.348 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.348 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.606 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:33.606 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.173 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.174 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.742 00:21:34.742 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.742 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.742 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.001 { 00:21:35.001 "cntlid": 89, 00:21:35.001 "qid": 0, 00:21:35.001 "state": "enabled", 00:21:35.001 "thread": "nvmf_tgt_poll_group_000", 00:21:35.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:35.001 "listen_address": { 00:21:35.001 "trtype": "TCP", 00:21:35.001 "adrfam": "IPv4", 00:21:35.001 "traddr": "10.0.0.2", 00:21:35.001 "trsvcid": "4420" 00:21:35.001 }, 00:21:35.001 "peer_address": { 00:21:35.001 "trtype": "TCP", 00:21:35.001 "adrfam": "IPv4", 00:21:35.001 "traddr": "10.0.0.1", 00:21:35.001 "trsvcid": "34796" 00:21:35.001 }, 00:21:35.001 "auth": { 00:21:35.001 "state": "completed", 00:21:35.001 "digest": "sha384", 00:21:35.001 "dhgroup": "ffdhe8192" 00:21:35.001 } 00:21:35.001 } 00:21:35.001 ]' 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.001 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.260 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:35.260 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:35.827 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.827 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:35.827 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.827 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.827 06:32:07 
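A note on the DHHC-1 strings above: this is the NVMe in-band-authentication secret representation, where the two-digit field after DHHC-1 names the secret's transformation hash (00 for an untransformed secret, 01/02/03 for SHA-256/384/512), followed by the base64-encoded key material with a trailing checksum; that is why key0's host secret carries :00: while its controller secret carries :03:. Secrets of this form can be produced with nvme-cli; the invocation below is an assumption on my part, so check nvme-gen-dhchap-key(1) for the flags your version supports:

  # Hypothetical generation of a SHA-384-transformed 48-byte secret.
  nvme gen-dhchap-key --hmac=2 --key-length=48 -n "$hostnqn"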
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.827 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.827 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.827 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.086 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.653 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.653 { 00:21:36.653 "cntlid": 91, 00:21:36.653 "qid": 0, 00:21:36.653 "state": "enabled", 00:21:36.653 "thread": "nvmf_tgt_poll_group_000", 00:21:36.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:36.653 "listen_address": { 00:21:36.653 "trtype": "TCP", 00:21:36.653 "adrfam": "IPv4", 00:21:36.653 "traddr": "10.0.0.2", 00:21:36.653 "trsvcid": "4420" 00:21:36.653 }, 00:21:36.653 "peer_address": { 00:21:36.653 "trtype": "TCP", 00:21:36.653 "adrfam": "IPv4", 00:21:36.653 "traddr": "10.0.0.1", 00:21:36.653 "trsvcid": "34812" 00:21:36.653 }, 00:21:36.653 "auth": { 00:21:36.653 "state": "completed", 00:21:36.653 "digest": "sha384", 00:21:36.653 "dhgroup": "ffdhe8192" 00:21:36.653 } 00:21:36.653 } 00:21:36.653 ]' 00:21:36.653 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.912 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.912 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.912 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.912 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.912 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.912 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.912 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.171 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:37.171 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:37.739 06:32:09 
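The nvme_connect/disconnect pair above repeats the same authentication through the kernel initiator rather than the SPDK bdev layer. The secrets are passed in the NVMe interchange format DHHC-1:<t>:<base64 material>:, where, per the nvme-cli gen-dhchap-key convention, <t> encodes the transformation applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). A sketch with placeholder secrets (the real values stay in the trace above):

    # Kernel path: same subsystem and host NQN, secrets given inline.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:01:<base64 key material>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 key material>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0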
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.739 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.740 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.740 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.740 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.999 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.999 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.999 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.999 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.257 00:21:38.257 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.257 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.257 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.516 06:32:10 
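The recurring common/autotest_common.sh@561/@589 entries bracketing every rpc_cmd above are not part of the auth flow; they look like the framework's xtrace bracketing, which keeps set -x noise out of the RPC calls while still asserting their exit codes. A guess at the wrapper's shape, not a quote of autotest_common.sh:

    rpc_cmd() {
        xtrace_disable                # @561; prints the "set +x" lines
        local status=0
        rpc.py "$@" || status=$?
        xtrace_restore                # re-enables tracing afterwards
        [[ $status == 0 ]]            # shows up as "[[ 0 == 0 ]]" at @589
    }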
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.516 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.516 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.516 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.516 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.516 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.516 { 00:21:38.516 "cntlid": 93, 00:21:38.516 "qid": 0, 00:21:38.516 "state": "enabled", 00:21:38.516 "thread": "nvmf_tgt_poll_group_000", 00:21:38.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:38.516 "listen_address": { 00:21:38.516 "trtype": "TCP", 00:21:38.516 "adrfam": "IPv4", 00:21:38.516 "traddr": "10.0.0.2", 00:21:38.516 "trsvcid": "4420" 00:21:38.516 }, 00:21:38.516 "peer_address": { 00:21:38.516 "trtype": "TCP", 00:21:38.516 "adrfam": "IPv4", 00:21:38.516 "traddr": "10.0.0.1", 00:21:38.516 "trsvcid": "42878" 00:21:38.516 }, 00:21:38.516 "auth": { 00:21:38.516 "state": "completed", 00:21:38.516 "digest": "sha384", 00:21:38.516 "dhgroup": "ffdhe8192" 00:21:38.516 } 00:21:38.516 } 00:21:38.516 ]' 00:21:38.516 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.516 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.516 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.774 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.775 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.775 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.775 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.775 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.033 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:39.033 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.600 06:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.600 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.601 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.601 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.601 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.168 00:21:40.168 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.168 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.168 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.426 { 00:21:40.426 "cntlid": 95, 00:21:40.426 "qid": 0, 00:21:40.426 "state": "enabled", 00:21:40.426 "thread": "nvmf_tgt_poll_group_000", 00:21:40.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:40.426 "listen_address": { 00:21:40.426 "trtype": "TCP", 00:21:40.426 "adrfam": "IPv4", 00:21:40.426 "traddr": "10.0.0.2", 00:21:40.426 "trsvcid": "4420" 00:21:40.426 }, 00:21:40.426 "peer_address": { 00:21:40.426 "trtype": "TCP", 00:21:40.426 "adrfam": "IPv4", 00:21:40.426 "traddr": "10.0.0.1", 00:21:40.426 "trsvcid": "42924" 00:21:40.426 }, 00:21:40.427 "auth": { 00:21:40.427 "state": "completed", 00:21:40.427 "digest": "sha384", 00:21:40.427 "dhgroup": "ffdhe8192" 00:21:40.427 } 00:21:40.427 } 00:21:40.427 ]' 00:21:40.427 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.427 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.427 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.427 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.427 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.427 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.427 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.427 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.686 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:40.686 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.253 06:32:12 
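Note the asymmetry in the key3 iteration above: nvmf_subsystem_add_host and the controller attach carry only --dhchap-key key3, and the matching nvme connect passes a single --dhchap-secret with no --dhchap-ctrl-secret, i.e. the host authenticates to the controller without requiring the reverse direction. That falls out of the ${ckeys[$3]:+...} expansion at target/auth.sh@68. A reduced sketch of the mechanism, with hypothetical array contents and variable names:

    # ckeys[] is assumed to hold one controller-key name per key id,
    # with an empty slot where bidirectional auth is not wanted.
    ckeys=(ckey0 ckey1 ckey2 "")
    keyid=3
    # ":+" expands to the flag pair only when ckeys[keyid] is non-empty,
    # so for keyid=3 the ckey array stays empty and the flag is omitted.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"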
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.253 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.512 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.771 00:21:41.771 
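At target/auth.sh@118-121 the trace above shows the loop nest advancing from sha384/ffdhe8192 to sha512 with the null dhgroup: every digest is exercised against every dhgroup and every key id. The arrays are defined earlier in the script and are not visible in this excerpt, so the following shape is inferred from the values that appear in the trace:

    digests=(sha256 sha384 sha512)        # inferred; sha256 runs precede this excerpt
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do                   # target/auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do             # target/auth.sh@119
            for keyid in "${!keys[@]}"; do              # target/auth.sh@120
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # @123
            done
        done
    done

The null dhgroup runs that follow (from 06:32:13 on) perform the challenge-response without a Diffie-Hellman exchange, while the ffdhe* groups layer an ephemeral FFDHE exchange of the corresponding modulus size on top; in both cases the iteration only passes when auth.state comes back "completed".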
06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.771 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.771 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.029 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.029 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.029 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.029 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.029 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.029 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.029 { 00:21:42.029 "cntlid": 97, 00:21:42.029 "qid": 0, 00:21:42.029 "state": "enabled", 00:21:42.029 "thread": "nvmf_tgt_poll_group_000", 00:21:42.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:42.029 "listen_address": { 00:21:42.029 "trtype": "TCP", 00:21:42.029 "adrfam": "IPv4", 00:21:42.029 "traddr": "10.0.0.2", 00:21:42.029 "trsvcid": "4420" 00:21:42.029 }, 00:21:42.029 "peer_address": { 00:21:42.029 "trtype": "TCP", 00:21:42.029 "adrfam": "IPv4", 00:21:42.029 "traddr": "10.0.0.1", 00:21:42.029 "trsvcid": "42950" 00:21:42.029 }, 00:21:42.029 "auth": { 00:21:42.029 "state": "completed", 00:21:42.029 "digest": "sha512", 00:21:42.029 "dhgroup": "null" 00:21:42.029 } 00:21:42.029 } 00:21:42.029 ]' 00:21:42.029 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.030 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.030 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.030 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.030 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.030 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.030 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.030 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.288 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:42.288 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:42.855 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.855 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:42.855 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.855 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.855 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.855 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.855 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.855 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.114 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.373 00:21:43.373 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.373 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.373 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.373 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.373 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.373 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.373 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.632 { 00:21:43.632 "cntlid": 99, 00:21:43.632 "qid": 0, 00:21:43.632 "state": "enabled", 00:21:43.632 "thread": "nvmf_tgt_poll_group_000", 00:21:43.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:43.632 "listen_address": { 00:21:43.632 "trtype": "TCP", 00:21:43.632 "adrfam": "IPv4", 00:21:43.632 "traddr": "10.0.0.2", 00:21:43.632 "trsvcid": "4420" 00:21:43.632 }, 00:21:43.632 "peer_address": { 00:21:43.632 "trtype": "TCP", 00:21:43.632 "adrfam": "IPv4", 00:21:43.632 "traddr": "10.0.0.1", 00:21:43.632 "trsvcid": "42976" 00:21:43.632 }, 00:21:43.632 "auth": { 00:21:43.632 "state": "completed", 00:21:43.632 "digest": "sha512", 00:21:43.632 "dhgroup": "null" 00:21:43.632 } 00:21:43.632 } 00:21:43.632 ]' 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.632 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.891 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:43.891 06:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.458 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.459 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.459 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.459 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:44.459 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.717 00:21:44.717 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.717 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.717 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.976 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.976 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.976 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.976 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.976 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.976 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.976 { 00:21:44.976 "cntlid": 101, 00:21:44.976 "qid": 0, 00:21:44.976 "state": "enabled", 00:21:44.976 "thread": "nvmf_tgt_poll_group_000", 00:21:44.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:44.976 "listen_address": { 00:21:44.976 "trtype": "TCP", 00:21:44.977 "adrfam": "IPv4", 00:21:44.977 "traddr": "10.0.0.2", 00:21:44.977 "trsvcid": "4420" 00:21:44.977 }, 00:21:44.977 "peer_address": { 00:21:44.977 "trtype": "TCP", 00:21:44.977 "adrfam": "IPv4", 00:21:44.977 "traddr": "10.0.0.1", 00:21:44.977 "trsvcid": "42998" 00:21:44.977 }, 00:21:44.977 "auth": { 00:21:44.977 "state": "completed", 00:21:44.977 "digest": "sha512", 00:21:44.977 "dhgroup": "null" 00:21:44.977 } 00:21:44.977 } 00:21:44.977 ]' 00:21:44.977 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.977 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.977 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.236 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:45.236 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.236 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.236 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.236 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.500 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:45.501 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.071 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.330 00:21:46.330 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.330 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.330 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.588 { 00:21:46.588 "cntlid": 103, 00:21:46.588 "qid": 0, 00:21:46.588 "state": "enabled", 00:21:46.588 "thread": "nvmf_tgt_poll_group_000", 00:21:46.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:46.588 "listen_address": { 00:21:46.588 "trtype": "TCP", 00:21:46.588 "adrfam": "IPv4", 00:21:46.588 "traddr": "10.0.0.2", 00:21:46.588 "trsvcid": "4420" 00:21:46.588 }, 00:21:46.588 "peer_address": { 00:21:46.588 "trtype": "TCP", 00:21:46.588 "adrfam": "IPv4", 00:21:46.588 "traddr": "10.0.0.1", 00:21:46.588 "trsvcid": "43024" 00:21:46.588 }, 00:21:46.588 "auth": { 00:21:46.588 "state": "completed", 00:21:46.588 "digest": "sha512", 00:21:46.588 "dhgroup": "null" 00:21:46.588 } 00:21:46.588 } 00:21:46.588 ]' 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.588 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.846 06:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:46.846 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.413 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.672 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.930 00:21:47.930 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.930 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.930 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.190 { 00:21:48.190 "cntlid": 105, 00:21:48.190 "qid": 0, 00:21:48.190 "state": "enabled", 00:21:48.190 "thread": "nvmf_tgt_poll_group_000", 00:21:48.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:48.190 "listen_address": { 00:21:48.190 "trtype": "TCP", 00:21:48.190 "adrfam": "IPv4", 00:21:48.190 "traddr": "10.0.0.2", 00:21:48.190 "trsvcid": "4420" 00:21:48.190 }, 00:21:48.190 "peer_address": { 00:21:48.190 "trtype": "TCP", 00:21:48.190 "adrfam": "IPv4", 00:21:48.190 "traddr": "10.0.0.1", 00:21:48.190 "trsvcid": "43064" 00:21:48.190 }, 00:21:48.190 "auth": { 00:21:48.190 "state": "completed", 00:21:48.190 "digest": "sha512", 00:21:48.190 "dhgroup": "ffdhe2048" 00:21:48.190 } 00:21:48.190 } 00:21:48.190 ]' 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.190 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.190 06:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.448 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:48.448 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:49.016 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.016 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:49.016 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.016 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.016 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.016 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.016 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.016 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.276 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.535 00:21:49.535 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.535 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.535 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.535 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.535 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.535 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.535 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.793 { 00:21:49.793 "cntlid": 107, 00:21:49.793 "qid": 0, 00:21:49.793 "state": "enabled", 00:21:49.793 "thread": "nvmf_tgt_poll_group_000", 00:21:49.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:49.793 "listen_address": { 00:21:49.793 "trtype": "TCP", 00:21:49.793 "adrfam": "IPv4", 00:21:49.793 "traddr": "10.0.0.2", 00:21:49.793 "trsvcid": "4420" 00:21:49.793 }, 00:21:49.793 "peer_address": { 00:21:49.793 "trtype": "TCP", 00:21:49.793 "adrfam": "IPv4", 00:21:49.793 "traddr": "10.0.0.1", 00:21:49.793 "trsvcid": "35134" 00:21:49.793 }, 00:21:49.793 "auth": { 00:21:49.793 "state": "completed", 00:21:49.793 "digest": "sha512", 00:21:49.793 "dhgroup": "ffdhe2048" 00:21:49.793 } 00:21:49.793 } 00:21:49.793 ]' 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.793 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.052 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:50.052 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.618 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.619 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:50.619 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:50.619 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.619 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
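A note on the secret strings in the traces above: under the DH-HMAC-CHAP secret representation, the two-digit field after DHHC-1 names the transformation applied to the key (00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, as I read the NVMe spec; the log itself does not say), and the final base64 field carries the key material plus a checksum. That matches the pairings visible here, e.g. a DHHC-1:01: host secret alongside a DHHC-1:02: controller secret. A trivial field-splitter, using a placeholder secret rather than one of this run's keys:

    secret='DHHC-1:01:PLACEHOLDERBASE64==:'   # placeholder, not a key from this run
    IFS=: read -r fmt hash b64 _ <<< "$secret"
    printf 'format=%s hash-id=%s payload=%s\n' "$fmt" "$hash" "$b64"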
00:21:50.619 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.619 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.877 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.877 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.877 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.877 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.877 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.136 { 00:21:51.136 "cntlid": 109, 00:21:51.136 "qid": 0, 00:21:51.136 "state": "enabled", 00:21:51.136 "thread": "nvmf_tgt_poll_group_000", 00:21:51.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:51.136 "listen_address": { 00:21:51.136 "trtype": "TCP", 00:21:51.136 "adrfam": "IPv4", 00:21:51.136 "traddr": "10.0.0.2", 00:21:51.136 "trsvcid": "4420" 00:21:51.136 }, 00:21:51.136 "peer_address": { 00:21:51.136 "trtype": "TCP", 00:21:51.136 "adrfam": "IPv4", 00:21:51.136 "traddr": "10.0.0.1", 00:21:51.136 "trsvcid": "35162" 00:21:51.136 }, 00:21:51.136 "auth": { 00:21:51.136 "state": "completed", 00:21:51.136 "digest": "sha512", 00:21:51.136 "dhgroup": "ffdhe2048" 00:21:51.136 } 00:21:51.136 } 00:21:51.136 ]' 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.136 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.394 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.394 06:32:23 
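The backslash-laden comparisons such as [[ nvme0 == \n\v\m\e\0 ]] and [[ completed == \c\o\m\p\l\e\t\e\d ]] are not errors in the script: when the right-hand side of == inside [[ ]] is quoted, bash's xtrace prints it with every character escaped, to show the comparison is literal rather than a glob. A minimal reproduction (assuming the source compares a quoted literal):

    set -x
    name=nvme0
    [[ $name == "nvme0" ]] && echo match   # xtrace renders: [[ nvme0 == \n\v\m\e\0 ]]
    set +x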
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.395 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.395 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.395 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.395 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.652 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:51.652 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:52.218 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.218 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:52.218 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.218 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.218 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.218 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.218 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.218 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.218 06:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.218 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.477 00:21:52.477 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.477 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.477 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.736 { 00:21:52.736 "cntlid": 111, 00:21:52.736 "qid": 0, 00:21:52.736 "state": "enabled", 00:21:52.736 "thread": "nvmf_tgt_poll_group_000", 00:21:52.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:52.736 "listen_address": { 00:21:52.736 "trtype": "TCP", 00:21:52.736 "adrfam": "IPv4", 00:21:52.736 "traddr": "10.0.0.2", 00:21:52.736 "trsvcid": "4420" 00:21:52.736 }, 00:21:52.736 "peer_address": { 00:21:52.736 "trtype": "TCP", 00:21:52.736 "adrfam": "IPv4", 00:21:52.736 "traddr": "10.0.0.1", 00:21:52.736 "trsvcid": "35184" 00:21:52.736 }, 00:21:52.736 "auth": { 00:21:52.736 "state": "completed", 00:21:52.736 "digest": "sha512", 00:21:52.736 "dhgroup": "ffdhe2048" 00:21:52.736 } 00:21:52.736 } 00:21:52.736 ]' 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.736 
06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.736 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.994 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.994 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.994 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.994 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:52.994 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:53.561 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.561 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:53.562 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.562 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.562 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.562 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.562 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.562 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.562 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
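Note the asymmetry in the key3 pass above: nvmf_subsystem_add_host is called with --dhchap-key key3 but no --dhchap-ctrlr-key, and the matching nvme connect carries only --dhchap-secret, so this pass exercises host-only (unidirectional) authentication. The traced ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) is the mechanism: ${var:+word} expands to nothing when the controller key for that index is empty, so the flag pair silently vanishes. A self-contained illustration with placeholder values:

    ckeys=(c0 c1 c2 '')    # placeholder controller keys; index 3 deliberately empty
    for i in 0 3; do
        ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "keyid=$i extra-args=${#ckey[@]}"   # 2 for keyid=0, 0 for keyid=3
    done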
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.821 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.079 00:21:54.079 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.079 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.079 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.338 { 00:21:54.338 "cntlid": 113, 00:21:54.338 "qid": 0, 00:21:54.338 "state": "enabled", 00:21:54.338 "thread": "nvmf_tgt_poll_group_000", 00:21:54.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:54.338 "listen_address": { 00:21:54.338 "trtype": "TCP", 00:21:54.338 "adrfam": "IPv4", 00:21:54.338 "traddr": "10.0.0.2", 00:21:54.338 "trsvcid": "4420" 00:21:54.338 }, 00:21:54.338 "peer_address": { 00:21:54.338 "trtype": "TCP", 00:21:54.338 "adrfam": "IPv4", 00:21:54.338 "traddr": "10.0.0.1", 00:21:54.338 "trsvcid": "35224" 00:21:54.338 }, 00:21:54.338 "auth": { 00:21:54.338 "state": "completed", 00:21:54.338 "digest": "sha512", 00:21:54.338 "dhgroup": "ffdhe3072" 00:21:54.338 } 00:21:54.338 } 00:21:54.338 ]' 00:21:54.338 06:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.338 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.597 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:54.597 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:21:55.164 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.164 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:55.164 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.164 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.164 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.164 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.164 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.164 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
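Every hostrpc line above expands, at target/auth.sh@31, to the same rpc.py call against /var/tmp/host.sock, while bare rpc_cmd calls go to the target application's default socket. A hedged reconstruction of the wrapper as the trace shows it:

    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    # e.g. the controller-name probe run before every qpair check:
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'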
target/auth.sh@67 -- # digest=sha512 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.422 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.423 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.423 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.682 00:21:55.682 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.682 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.682 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.941 { 00:21:55.941 "cntlid": 115, 00:21:55.941 "qid": 0, 00:21:55.941 "state": "enabled", 00:21:55.941 "thread": "nvmf_tgt_poll_group_000", 00:21:55.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:55.941 "listen_address": { 00:21:55.941 "trtype": "TCP", 00:21:55.941 "adrfam": "IPv4", 00:21:55.941 "traddr": "10.0.0.2", 00:21:55.941 "trsvcid": "4420" 00:21:55.941 }, 00:21:55.941 "peer_address": { 00:21:55.941 "trtype": "TCP", 00:21:55.941 "adrfam": "IPv4", 
00:21:55.941 "traddr": "10.0.0.1", 00:21:55.941 "trsvcid": "35254" 00:21:55.941 }, 00:21:55.941 "auth": { 00:21:55.941 "state": "completed", 00:21:55.941 "digest": "sha512", 00:21:55.941 "dhgroup": "ffdhe3072" 00:21:55.941 } 00:21:55.941 } 00:21:55.941 ]' 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.941 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.199 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:56.199 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:21:56.765 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.765 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:56.765 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.765 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.765 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.765 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.765 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.765 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.024 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.282 00:21:57.282 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.282 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.282 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.539 { 00:21:57.539 "cntlid": 117, 00:21:57.539 "qid": 0, 00:21:57.539 "state": "enabled", 00:21:57.539 "thread": "nvmf_tgt_poll_group_000", 00:21:57.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:57.539 "listen_address": { 00:21:57.539 "trtype": "TCP", 
00:21:57.539 "adrfam": "IPv4", 00:21:57.539 "traddr": "10.0.0.2", 00:21:57.539 "trsvcid": "4420" 00:21:57.539 }, 00:21:57.539 "peer_address": { 00:21:57.539 "trtype": "TCP", 00:21:57.539 "adrfam": "IPv4", 00:21:57.539 "traddr": "10.0.0.1", 00:21:57.539 "trsvcid": "35276" 00:21:57.539 }, 00:21:57.539 "auth": { 00:21:57.539 "state": "completed", 00:21:57.539 "digest": "sha512", 00:21:57.539 "dhgroup": "ffdhe3072" 00:21:57.539 } 00:21:57.539 } 00:21:57.539 ]' 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.539 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.797 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:57.797 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:21:58.364 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.364 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:58.364 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.364 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.364 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.364 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.364 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.364 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.623 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.882 00:21:58.882 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.882 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.882 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.142 { 00:21:59.142 "cntlid": 119, 00:21:59.142 "qid": 0, 00:21:59.142 "state": "enabled", 00:21:59.142 "thread": "nvmf_tgt_poll_group_000", 00:21:59.142 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:59.142 "listen_address": { 00:21:59.142 "trtype": "TCP", 00:21:59.142 "adrfam": "IPv4", 00:21:59.142 "traddr": "10.0.0.2", 00:21:59.142 "trsvcid": "4420" 00:21:59.142 }, 00:21:59.142 "peer_address": { 00:21:59.142 "trtype": "TCP", 00:21:59.142 "adrfam": "IPv4", 00:21:59.142 "traddr": "10.0.0.1", 00:21:59.142 "trsvcid": "58494" 00:21:59.142 }, 00:21:59.142 "auth": { 00:21:59.142 "state": "completed", 00:21:59.142 "digest": "sha512", 00:21:59.142 "dhgroup": "ffdhe3072" 00:21:59.142 } 00:21:59.142 } 00:21:59.142 ]' 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.142 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.400 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:59.400 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:21:59.967 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.967 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:59.967 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.967 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.967 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.967 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.967 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.967 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.967 06:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.226 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.484 00:22:00.484 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.484 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.484 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.743 06:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.743 { 00:22:00.743 "cntlid": 121, 00:22:00.743 "qid": 0, 00:22:00.743 "state": "enabled", 00:22:00.743 "thread": "nvmf_tgt_poll_group_000", 00:22:00.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:00.743 "listen_address": { 00:22:00.743 "trtype": "TCP", 00:22:00.743 "adrfam": "IPv4", 00:22:00.743 "traddr": "10.0.0.2", 00:22:00.743 "trsvcid": "4420" 00:22:00.743 }, 00:22:00.743 "peer_address": { 00:22:00.743 "trtype": "TCP", 00:22:00.743 "adrfam": "IPv4", 00:22:00.743 "traddr": "10.0.0.1", 00:22:00.743 "trsvcid": "58516" 00:22:00.743 }, 00:22:00.743 "auth": { 00:22:00.743 "state": "completed", 00:22:00.743 "digest": "sha512", 00:22:00.743 "dhgroup": "ffdhe4096" 00:22:00.743 } 00:22:00.743 } 00:22:00.743 ]' 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.743 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.002 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:22:01.002 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:22:01.569 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.569 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:01.569 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.569 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.569 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
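The target-side half of every pass is symmetric: the host NQN is granted access with named key handles before the kernel-initiator connect, and revoked after the disconnect. A minimal sketch with the names exactly as traced (the key0/ckey0 entries themselves are registered earlier in the test, outside this excerpt):

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # ... nvme connect / verify / nvme disconnect ...
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"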
00:22:01.569 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.569 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.569 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.827 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.086 00:22:02.086 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.086 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.086 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.344 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.344 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.344 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.344 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.344 { 00:22:02.344 "cntlid": 123, 00:22:02.344 "qid": 0, 00:22:02.344 "state": "enabled", 00:22:02.344 "thread": "nvmf_tgt_poll_group_000", 00:22:02.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:02.344 "listen_address": { 00:22:02.344 "trtype": "TCP", 00:22:02.344 "adrfam": "IPv4", 00:22:02.344 "traddr": "10.0.0.2", 00:22:02.344 "trsvcid": "4420" 00:22:02.344 }, 00:22:02.344 "peer_address": { 00:22:02.344 "trtype": "TCP", 00:22:02.344 "adrfam": "IPv4", 00:22:02.344 "traddr": "10.0.0.1", 00:22:02.344 "trsvcid": "58528" 00:22:02.344 }, 00:22:02.344 "auth": { 00:22:02.344 "state": "completed", 00:22:02.344 "digest": "sha512", 00:22:02.344 "dhgroup": "ffdhe4096" 00:22:02.344 } 00:22:02.344 } 00:22:02.344 ]' 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.344 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.626 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:22:02.626 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:22:03.193 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.193 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:03.193 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.193 06:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.193 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.193 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.193 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.193 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.451 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.709 00:22:03.709 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.709 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.709 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.969 06:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.969 { 00:22:03.969 "cntlid": 125, 00:22:03.969 "qid": 0, 00:22:03.969 "state": "enabled", 00:22:03.969 "thread": "nvmf_tgt_poll_group_000", 00:22:03.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:03.969 "listen_address": { 00:22:03.969 "trtype": "TCP", 00:22:03.969 "adrfam": "IPv4", 00:22:03.969 "traddr": "10.0.0.2", 00:22:03.969 "trsvcid": "4420" 00:22:03.969 }, 00:22:03.969 "peer_address": { 00:22:03.969 "trtype": "TCP", 00:22:03.969 "adrfam": "IPv4", 00:22:03.969 "traddr": "10.0.0.1", 00:22:03.969 "trsvcid": "58548" 00:22:03.969 }, 00:22:03.969 "auth": { 00:22:03.969 "state": "completed", 00:22:03.969 "digest": "sha512", 00:22:03.969 "dhgroup": "ffdhe4096" 00:22:03.969 } 00:22:03.969 } 00:22:03.969 ]' 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.969 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.227 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:22:04.227 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:22:04.794 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.794 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:04.794 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.794 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.794 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.794 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.794 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:04.794 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.052 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.310 00:22:05.310 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.310 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.310 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.569 06:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.569 { 00:22:05.569 "cntlid": 127, 00:22:05.569 "qid": 0, 00:22:05.569 "state": "enabled", 00:22:05.569 "thread": "nvmf_tgt_poll_group_000", 00:22:05.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:05.569 "listen_address": { 00:22:05.569 "trtype": "TCP", 00:22:05.569 "adrfam": "IPv4", 00:22:05.569 "traddr": "10.0.0.2", 00:22:05.569 "trsvcid": "4420" 00:22:05.569 }, 00:22:05.569 "peer_address": { 00:22:05.569 "trtype": "TCP", 00:22:05.569 "adrfam": "IPv4", 00:22:05.569 "traddr": "10.0.0.1", 00:22:05.569 "trsvcid": "58574" 00:22:05.569 }, 00:22:05.569 "auth": { 00:22:05.569 "state": "completed", 00:22:05.569 "digest": "sha512", 00:22:05.569 "dhgroup": "ffdhe4096" 00:22:05.569 } 00:22:05.569 } 00:22:05.569 ]' 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.569 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.827 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:05.827 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.394 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.652 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.911 00:22:06.911 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.911 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.911 
06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.170 { 00:22:07.170 "cntlid": 129, 00:22:07.170 "qid": 0, 00:22:07.170 "state": "enabled", 00:22:07.170 "thread": "nvmf_tgt_poll_group_000", 00:22:07.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:07.170 "listen_address": { 00:22:07.170 "trtype": "TCP", 00:22:07.170 "adrfam": "IPv4", 00:22:07.170 "traddr": "10.0.0.2", 00:22:07.170 "trsvcid": "4420" 00:22:07.170 }, 00:22:07.170 "peer_address": { 00:22:07.170 "trtype": "TCP", 00:22:07.170 "adrfam": "IPv4", 00:22:07.170 "traddr": "10.0.0.1", 00:22:07.170 "trsvcid": "58594" 00:22:07.170 }, 00:22:07.170 "auth": { 00:22:07.170 "state": "completed", 00:22:07.170 "digest": "sha512", 00:22:07.170 "dhgroup": "ffdhe6144" 00:22:07.170 } 00:22:07.170 } 00:22:07.170 ]' 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.170 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.429 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.429 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.429 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.429 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:22:07.429 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret 
DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:22:07.994 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.994 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:07.994 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.994 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.994 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.994 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.994 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.994 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:08.276 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:08.276 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.276 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.276 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:08.276 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:08.276 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.276 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.276 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.276 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.276 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.276 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.276 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.276 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.569 00:22:08.569 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.569 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.569 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.839 { 00:22:08.839 "cntlid": 131, 00:22:08.839 "qid": 0, 00:22:08.839 "state": "enabled", 00:22:08.839 "thread": "nvmf_tgt_poll_group_000", 00:22:08.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:08.839 "listen_address": { 00:22:08.839 "trtype": "TCP", 00:22:08.839 "adrfam": "IPv4", 00:22:08.839 "traddr": "10.0.0.2", 00:22:08.839 "trsvcid": "4420" 00:22:08.839 }, 00:22:08.839 "peer_address": { 00:22:08.839 "trtype": "TCP", 00:22:08.839 "adrfam": "IPv4", 00:22:08.839 "traddr": "10.0.0.1", 00:22:08.839 "trsvcid": "45604" 00:22:08.839 }, 00:22:08.839 "auth": { 00:22:08.839 "state": "completed", 00:22:08.839 "digest": "sha512", 00:22:08.839 "dhgroup": "ffdhe6144" 00:22:08.839 } 00:22:08.839 } 00:22:08.839 ]' 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.839 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.128 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.128 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.128 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.128 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.128 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.388 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:22:09.388 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:22:09.954 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.954 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:09.954 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.954 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.954 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.954 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.954 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.955 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.523 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.523 { 00:22:10.523 "cntlid": 133, 00:22:10.523 "qid": 0, 00:22:10.523 "state": "enabled", 00:22:10.523 "thread": "nvmf_tgt_poll_group_000", 00:22:10.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:10.523 "listen_address": { 00:22:10.523 "trtype": "TCP", 00:22:10.523 "adrfam": "IPv4", 00:22:10.523 "traddr": "10.0.0.2", 00:22:10.523 "trsvcid": "4420" 00:22:10.523 }, 00:22:10.523 "peer_address": { 00:22:10.523 "trtype": "TCP", 00:22:10.523 "adrfam": "IPv4", 00:22:10.523 "traddr": "10.0.0.1", 00:22:10.523 "trsvcid": "45636" 00:22:10.523 }, 00:22:10.523 "auth": { 00:22:10.523 "state": "completed", 00:22:10.523 "digest": "sha512", 00:22:10.523 "dhgroup": "ffdhe6144" 00:22:10.523 } 00:22:10.523 } 00:22:10.523 ]' 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.523 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.781 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.781 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.781 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.781 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.781 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.041 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret 
DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:22:11.041 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:11.609 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.176 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.176 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.176 { 00:22:12.176 "cntlid": 135, 00:22:12.176 "qid": 0, 00:22:12.176 "state": "enabled", 00:22:12.176 "thread": "nvmf_tgt_poll_group_000", 00:22:12.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:12.176 "listen_address": { 00:22:12.176 "trtype": "TCP", 00:22:12.176 "adrfam": "IPv4", 00:22:12.176 "traddr": "10.0.0.2", 00:22:12.176 "trsvcid": "4420" 00:22:12.176 }, 00:22:12.176 "peer_address": { 00:22:12.176 "trtype": "TCP", 00:22:12.177 "adrfam": "IPv4", 00:22:12.177 "traddr": "10.0.0.1", 00:22:12.177 "trsvcid": "45654" 00:22:12.177 }, 00:22:12.177 "auth": { 00:22:12.177 "state": "completed", 00:22:12.177 "digest": "sha512", 00:22:12.177 "dhgroup": "ffdhe6144" 00:22:12.177 } 00:22:12.177 } 00:22:12.177 ]' 00:22:12.177 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.435 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.435 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.435 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.435 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.435 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.435 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.435 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.693 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:12.693 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.260 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.260 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.519 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.519 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.519 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.519 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.776 00:22:13.777 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.777 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.777 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.035 { 00:22:14.035 "cntlid": 137, 00:22:14.035 "qid": 0, 00:22:14.035 "state": "enabled", 00:22:14.035 "thread": "nvmf_tgt_poll_group_000", 00:22:14.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:14.035 "listen_address": { 00:22:14.035 "trtype": "TCP", 00:22:14.035 "adrfam": "IPv4", 00:22:14.035 "traddr": "10.0.0.2", 00:22:14.035 "trsvcid": "4420" 00:22:14.035 }, 00:22:14.035 "peer_address": { 00:22:14.035 "trtype": "TCP", 00:22:14.035 "adrfam": "IPv4", 00:22:14.035 "traddr": "10.0.0.1", 00:22:14.035 "trsvcid": "45682" 00:22:14.035 }, 00:22:14.035 "auth": { 00:22:14.035 "state": "completed", 00:22:14.035 "digest": "sha512", 00:22:14.035 "dhgroup": "ffdhe8192" 00:22:14.035 } 00:22:14.035 } 00:22:14.035 ]' 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.035 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.294 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.294 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.294 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.294 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.294 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.294 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:22:14.294 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:22:14.861 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.861 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:14.861 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.861 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.861 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.861 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.861 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.861 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.120 06:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.120 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.688 00:22:15.688 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.688 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.688 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.947 { 00:22:15.947 "cntlid": 139, 00:22:15.947 "qid": 0, 00:22:15.947 "state": "enabled", 00:22:15.947 "thread": "nvmf_tgt_poll_group_000", 00:22:15.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:15.947 "listen_address": { 00:22:15.947 "trtype": "TCP", 00:22:15.947 "adrfam": "IPv4", 00:22:15.947 "traddr": "10.0.0.2", 00:22:15.947 "trsvcid": "4420" 00:22:15.947 }, 00:22:15.947 "peer_address": { 00:22:15.947 "trtype": "TCP", 00:22:15.947 "adrfam": "IPv4", 00:22:15.947 "traddr": "10.0.0.1", 00:22:15.947 "trsvcid": "45706" 00:22:15.947 }, 00:22:15.947 "auth": { 00:22:15.947 "state": "completed", 00:22:15.947 "digest": "sha512", 00:22:15.947 "dhgroup": "ffdhe8192" 00:22:15.947 } 00:22:15.947 } 00:22:15.947 ]' 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.947 06:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.947 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.207 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:22:16.207 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: --dhchap-ctrl-secret DHHC-1:02:NDViNTE0ODk4NjVlZWFmMWVmM2IyNTFjM2FiNzJmMWM4YWJjN2Q1MDI4ZjdiMzQwy7PKjQ==: 00:22:16.775 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.775 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:16.775 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.775 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.775 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.775 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.775 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.775 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.034 06:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.034 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.601 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.601 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.601 { 00:22:17.601 "cntlid": 141, 00:22:17.601 "qid": 0, 00:22:17.601 "state": "enabled", 00:22:17.601 "thread": "nvmf_tgt_poll_group_000", 00:22:17.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:17.602 "listen_address": { 00:22:17.602 "trtype": "TCP", 00:22:17.602 "adrfam": "IPv4", 00:22:17.602 "traddr": "10.0.0.2", 00:22:17.602 "trsvcid": "4420" 00:22:17.602 }, 00:22:17.602 "peer_address": { 00:22:17.602 "trtype": "TCP", 00:22:17.602 "adrfam": "IPv4", 00:22:17.602 "traddr": "10.0.0.1", 00:22:17.602 "trsvcid": "45728" 00:22:17.602 }, 00:22:17.602 "auth": { 00:22:17.602 "state": "completed", 00:22:17.602 "digest": "sha512", 00:22:17.602 "dhgroup": "ffdhe8192" 00:22:17.602 } 00:22:17.602 } 00:22:17.602 ]' 00:22:17.602 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.602 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.860 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.860 06:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.860 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.860 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.860 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.860 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.118 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:22:18.118 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:01:MjliMDUxZDI0MzVkOTk1MWM1ZWUyODNhNmJjN2I1YzL0PRcf: 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.686 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:18.945 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:18.945 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.945 06:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:18.945 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.945 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.945 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.945 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.945 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.945 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.203 00:22:19.203 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.203 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.203 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.462 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.462 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.462 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.462 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.462 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.462 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.462 { 00:22:19.462 "cntlid": 143, 00:22:19.462 "qid": 0, 00:22:19.462 "state": "enabled", 00:22:19.462 "thread": "nvmf_tgt_poll_group_000", 00:22:19.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:19.462 "listen_address": { 00:22:19.462 "trtype": "TCP", 00:22:19.462 "adrfam": "IPv4", 00:22:19.462 "traddr": "10.0.0.2", 00:22:19.462 "trsvcid": "4420" 00:22:19.462 }, 00:22:19.462 "peer_address": { 00:22:19.462 "trtype": "TCP", 00:22:19.462 "adrfam": "IPv4", 00:22:19.462 "traddr": "10.0.0.1", 00:22:19.462 "trsvcid": "41070" 00:22:19.462 }, 00:22:19.462 "auth": { 00:22:19.462 "state": "completed", 00:22:19.462 "digest": "sha512", 00:22:19.462 "dhgroup": "ffdhe8192" 00:22:19.462 } 00:22:19.462 } 00:22:19.462 ]' 00:22:19.462 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.462 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.462 
06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.720 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.720 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.720 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.720 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.720 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.978 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:19.978 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.546 06:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.546 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.113 00:22:21.113 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.113 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.113 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.372 { 00:22:21.372 "cntlid": 145, 00:22:21.372 "qid": 0, 00:22:21.372 "state": "enabled", 00:22:21.372 "thread": "nvmf_tgt_poll_group_000", 00:22:21.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:21.372 "listen_address": { 00:22:21.372 "trtype": "TCP", 00:22:21.372 "adrfam": "IPv4", 00:22:21.372 "traddr": "10.0.0.2", 00:22:21.372 "trsvcid": "4420" 00:22:21.372 }, 00:22:21.372 "peer_address": { 00:22:21.372 
"trtype": "TCP", 00:22:21.372 "adrfam": "IPv4", 00:22:21.372 "traddr": "10.0.0.1", 00:22:21.372 "trsvcid": "41108" 00:22:21.372 }, 00:22:21.372 "auth": { 00:22:21.372 "state": "completed", 00:22:21.372 "digest": "sha512", 00:22:21.372 "dhgroup": "ffdhe8192" 00:22:21.372 } 00:22:21.372 } 00:22:21.372 ]' 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.372 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.631 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:22:21.631 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzUxYzNkODQ2MDhiNjg0MzY5OTZhNjBjOWUzMDJhMGI5YTgzNmU1NjY1NjM4Yzk3aI3eWg==: --dhchap-ctrl-secret DHHC-1:03:NzFiZjY3NDQ4MGY5YjFkOGY3NDg0Y2M3ZWQxODM5MTNkMjkzNzVjNTU0ZTkxNjQ0OTIxY2NmODBlNWMxYWU1Of+wUH4=: 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:22.198 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:22.765 request: 00:22:22.765 { 00:22:22.765 "name": "nvme0", 00:22:22.765 "trtype": "tcp", 00:22:22.765 "traddr": "10.0.0.2", 00:22:22.765 "adrfam": "ipv4", 00:22:22.765 "trsvcid": "4420", 00:22:22.765 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:22.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:22.765 "prchk_reftag": false, 00:22:22.765 "prchk_guard": false, 00:22:22.765 "hdgst": false, 00:22:22.765 "ddgst": false, 00:22:22.765 "dhchap_key": "key2", 00:22:22.765 "allow_unrecognized_csi": false, 00:22:22.765 "method": "bdev_nvme_attach_controller", 00:22:22.765 "req_id": 1 00:22:22.765 } 00:22:22.765 Got JSON-RPC error response 00:22:22.765 response: 00:22:22.765 { 00:22:22.765 "code": -5, 00:22:22.765 "message": "Input/output error" 00:22:22.765 } 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.765 06:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.765 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.024 request: 00:22:23.024 { 00:22:23.024 "name": "nvme0", 00:22:23.024 "trtype": "tcp", 00:22:23.024 "traddr": "10.0.0.2", 00:22:23.024 "adrfam": "ipv4", 00:22:23.024 "trsvcid": "4420", 00:22:23.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:23.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:23.024 "prchk_reftag": false, 00:22:23.024 "prchk_guard": false, 00:22:23.024 "hdgst": false, 00:22:23.024 "ddgst": false, 00:22:23.024 "dhchap_key": "key1", 00:22:23.024 "dhchap_ctrlr_key": "ckey2", 00:22:23.024 "allow_unrecognized_csi": false, 00:22:23.024 "method": "bdev_nvme_attach_controller", 00:22:23.024 "req_id": 1 00:22:23.024 } 00:22:23.024 Got JSON-RPC error response 00:22:23.024 response: 00:22:23.024 { 00:22:23.024 "code": -5, 00:22:23.024 "message": "Input/output error" 00:22:23.024 } 00:22:23.282 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:23.283 06:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.283 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.542 request: 00:22:23.542 { 00:22:23.542 "name": "nvme0", 00:22:23.542 "trtype": "tcp", 00:22:23.542 "traddr": "10.0.0.2", 00:22:23.542 "adrfam": "ipv4", 00:22:23.542 "trsvcid": "4420", 00:22:23.542 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:23.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:23.542 "prchk_reftag": false, 00:22:23.542 "prchk_guard": false, 00:22:23.542 "hdgst": false, 00:22:23.542 "ddgst": false, 00:22:23.542 "dhchap_key": "key1", 00:22:23.542 "dhchap_ctrlr_key": "ckey1", 00:22:23.542 "allow_unrecognized_csi": false, 00:22:23.542 "method": "bdev_nvme_attach_controller", 00:22:23.542 "req_id": 1 00:22:23.542 } 00:22:23.542 Got JSON-RPC error response 00:22:23.542 response: 00:22:23.542 { 00:22:23.542 "code": -5, 00:22:23.542 "message": "Input/output error" 00:22:23.542 } 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 529977 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 529977 ']' 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 529977 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:23.542 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 529977 00:22:23.801 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:23.801 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:23.801 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 529977' 00:22:23.801 killing process with pid 529977 00:22:23.801 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 529977 00:22:23.801 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 529977 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=551477 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 551477 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 551477 ']' 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:23.802 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 551477 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 551477 ']' 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:24.735 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:24.736 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.994 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:24.994 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:24.994 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:24.994 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.994 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.994 null0 00:22:24.994 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.994 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.994 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jNY 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.q4x ]] 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q4x 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ODS 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.jRx ]] 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jRx 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.995 06:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1aX 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.995 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.95D ]] 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.95D 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ejz 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:22:25.253 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.820 nvme0n1 00:22:25.820 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.820 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.820 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.079 { 00:22:26.079 "cntlid": 1, 00:22:26.079 "qid": 0, 00:22:26.079 "state": "enabled", 00:22:26.079 "thread": "nvmf_tgt_poll_group_000", 00:22:26.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:26.079 "listen_address": { 00:22:26.079 "trtype": "TCP", 00:22:26.079 "adrfam": "IPv4", 00:22:26.079 "traddr": "10.0.0.2", 00:22:26.079 "trsvcid": "4420" 00:22:26.079 }, 00:22:26.079 "peer_address": { 00:22:26.079 "trtype": "TCP", 00:22:26.079 "adrfam": "IPv4", 00:22:26.079 "traddr": "10.0.0.1", 00:22:26.079 "trsvcid": "41174" 00:22:26.079 }, 00:22:26.079 "auth": { 00:22:26.079 "state": "completed", 00:22:26.079 "digest": "sha512", 00:22:26.079 "dhgroup": "ffdhe8192" 00:22:26.079 } 00:22:26.079 } 00:22:26.079 ]' 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.079 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.338 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:26.338 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:26.905 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.163 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.424 request: 00:22:27.424 { 00:22:27.424 "name": "nvme0", 00:22:27.424 "trtype": "tcp", 00:22:27.424 "traddr": "10.0.0.2", 00:22:27.424 "adrfam": "ipv4", 00:22:27.424 "trsvcid": "4420", 00:22:27.424 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:27.424 "prchk_reftag": false, 00:22:27.424 "prchk_guard": false, 00:22:27.424 "hdgst": false, 00:22:27.424 "ddgst": false, 00:22:27.424 "dhchap_key": "key3", 00:22:27.424 "allow_unrecognized_csi": false, 00:22:27.424 "method": "bdev_nvme_attach_controller", 00:22:27.424 "req_id": 1 00:22:27.424 } 00:22:27.424 Got JSON-RPC error response 00:22:27.424 response: 00:22:27.424 { 00:22:27.424 "code": -5, 00:22:27.424 "message": "Input/output error" 00:22:27.424 } 00:22:27.424 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:27.424 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.424 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.424 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.424 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:27.424 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:27.424 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:27.424 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.686 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.945 request: 00:22:27.945 { 00:22:27.945 "name": "nvme0", 00:22:27.945 "trtype": "tcp", 00:22:27.945 "traddr": "10.0.0.2", 00:22:27.945 "adrfam": "ipv4", 00:22:27.945 "trsvcid": "4420", 00:22:27.945 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:27.945 "prchk_reftag": false, 00:22:27.945 "prchk_guard": false, 00:22:27.945 "hdgst": false, 00:22:27.945 "ddgst": false, 00:22:27.945 "dhchap_key": "key3", 00:22:27.945 "allow_unrecognized_csi": false, 00:22:27.945 "method": "bdev_nvme_attach_controller", 00:22:27.945 "req_id": 1 00:22:27.945 } 00:22:27.945 Got JSON-RPC error response 00:22:27.945 response: 00:22:27.945 { 00:22:27.945 "code": -5, 00:22:27.945 "message": "Input/output error" 00:22:27.945 } 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.945 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:28.204 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.204 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:28.204 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.204 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.204 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.204 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.462 request: 00:22:28.462 { 00:22:28.462 "name": "nvme0", 00:22:28.462 "trtype": "tcp", 00:22:28.462 "traddr": "10.0.0.2", 00:22:28.462 "adrfam": "ipv4", 00:22:28.462 "trsvcid": "4420", 00:22:28.462 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:28.462 "prchk_reftag": false, 00:22:28.462 "prchk_guard": false, 00:22:28.462 "hdgst": false, 00:22:28.462 "ddgst": false, 00:22:28.462 "dhchap_key": "key0", 00:22:28.462 "dhchap_ctrlr_key": "key1", 00:22:28.462 "allow_unrecognized_csi": false, 00:22:28.462 "method": "bdev_nvme_attach_controller", 00:22:28.462 "req_id": 1 00:22:28.462 } 00:22:28.462 Got JSON-RPC error response 00:22:28.462 response: 00:22:28.462 { 00:22:28.462 "code": -5, 00:22:28.462 "message": "Input/output error" 00:22:28.462 } 00:22:28.462 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:28.462 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:28.462 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:28.462 06:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:28.462 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:28.462 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:28.462 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:28.720 nvme0n1 00:22:28.720 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:28.720 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:28.720 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.979 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.979 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.979 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.979 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:22:28.979 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.979 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.237 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.237 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:29.237 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:29.237 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:29.804 nvme0n1 00:22:29.804 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:29.804 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:29.805 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.063 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.063 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.063 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.063 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.063 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.063 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:30.063 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:30.063 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.322 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.322 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:30.322 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: --dhchap-ctrl-secret DHHC-1:03:YmU3YTdhY2FhZWRlMmVmNjBiYjEzOGU2YmZmODNmZmQwOTdkYjg2M2FlZTI2ZGY4ODBkZTI4YTYzOTg3NTY2MhLCEXY=: 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:30.889 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:31.464 request: 00:22:31.464 { 00:22:31.464 "name": "nvme0", 00:22:31.464 "trtype": "tcp", 00:22:31.464 "traddr": "10.0.0.2", 00:22:31.464 "adrfam": "ipv4", 00:22:31.464 "trsvcid": "4420", 00:22:31.464 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:31.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:31.464 "prchk_reftag": false, 00:22:31.464 "prchk_guard": false, 00:22:31.464 "hdgst": false, 00:22:31.464 "ddgst": false, 00:22:31.464 "dhchap_key": "key1", 00:22:31.464 "allow_unrecognized_csi": false, 00:22:31.464 "method": "bdev_nvme_attach_controller", 00:22:31.464 "req_id": 1 00:22:31.464 } 00:22:31.464 Got JSON-RPC error response 00:22:31.464 response: 00:22:31.464 { 00:22:31.464 "code": -5, 00:22:31.464 "message": "Input/output error" 00:22:31.464 } 00:22:31.464 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:31.464 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.464 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.464 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.464 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.464 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.464 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.400 nvme0n1 00:22:32.400 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:32.400 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:32.401 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.401 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.401 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.401 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.659 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.659 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.659 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.659 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.659 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:32.659 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:32.659 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:32.917 nvme0n1 00:22:32.917 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:32.917 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.917 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:32.917 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.917 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.917 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: '' 2s 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: ]] 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjA1ODg0YTk2ODNmYjFhMzRhNDY0NzM0ZTgwYmVkYjVuWnga: 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:33.175 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: 2s 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: ]] 00:22:35.709 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGIwNmUwZDEyN2Q0ZTA1ZGRiMGExMmU0YzJiODg1MzE1ZGRkYjcxOWQ2Y2VlYjI16WXLmg==: 00:22:35.709 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:35.709 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:37.611 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.177 nvme0n1 00:22:38.177 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:38.177 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.177 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.177 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.177 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:38.177 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:38.743 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:39.001 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:39.001 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:39.001 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:39.260 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:39.825 request:
00:22:39.825 {
00:22:39.825 "name": "nvme0",
00:22:39.825 "dhchap_key": "key1",
00:22:39.825 "dhchap_ctrlr_key": "key3",
00:22:39.825 "method": "bdev_nvme_set_keys",
00:22:39.825 "req_id": 1
00:22:39.825 }
00:22:39.825 Got JSON-RPC error response
00:22:39.825 response:
00:22:39.825 {
00:22:39.825 "code": -13,
00:22:39.825 "message": "Permission denied"
00:22:39.825 }
00:22:39.825 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:22:39.826 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:39.826 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:39.826 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:39.826 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:39.826 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:39.826 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:39.826 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@262 -- # (( 1 != 0 ))
00:22:39.826 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:22:40.760 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:40.760 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:40.760 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:41.019 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:22:41.019 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:41.019 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:41.019 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:41.019 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:41.019 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:41.019 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:41.019 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:41.954 nvme0n1
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
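The rejection traced in the records that follow is the expected-failure half of the re-key test: target/auth.sh@270 has just rotated the subsystem to the key2/key3 pair, so the host-side attempt to switch the controller key back to key0 must be refused. A minimal sketch of the rule being exercised, assuming the same RPC sockets used throughout this run (the target on its default socket, the host on /var/tmp/host.sock):

    # target side: authorize the new key pair for this host first
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host side: re-keying the live controller to the authorized pair succeeds
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # a pair the subsystem does not allow (here key2/key0) is rejected
    # with JSON-RPC error -13, Permission denied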
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:41.954 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:42.213 request:
00:22:42.213 {
00:22:42.213 "name": "nvme0",
00:22:42.213 "dhchap_key": "key2",
00:22:42.213 "dhchap_ctrlr_key": "key0",
00:22:42.213 "method": "bdev_nvme_set_keys",
00:22:42.213 "req_id": 1
00:22:42.213 }
00:22:42.213 Got JSON-RPC error response
00:22:42.213 response:
00:22:42.213 {
00:22:42.213 "code": -13,
00:22:42.213 "message": "Permission denied"
00:22:42.213 }
00:22:42.213 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:22:42.213 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:42.213 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:42.213 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:42.213 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:22:42.213 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:22:42.213 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:42.471 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:22:42.471 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:22:43.406 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:22:43.406 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:22:43.406 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 530177
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 530177 ']'
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 530177
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname
00:22:43.665 06:33:15
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 530177
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 530177'
00:22:43.665 killing process with pid 530177
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 530177
00:22:43.665 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 530177
00:22:43.924 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:22:43.924 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:43.924 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:22:43.924 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:43.924 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:22:43.924 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:43.924 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:44.182 rmmod nvme_tcp
00:22:44.182 rmmod nvme_fabrics
00:22:44.182 rmmod nvme_keyring
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 551477 ']'
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 551477
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 551477 ']'
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 551477
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 551477
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 551477'
00:22:44.182 killing process with pid 551477
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 551477
00:22:44.182 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@976 -- # wait 551477
00:22:44.441 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:44.441 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:44.441 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:44.441 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:22:44.441 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:22:44.442 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:44.442 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:22:44.442 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:44.442 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:44.442 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:44.442 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:44.442 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jNY /tmp/spdk.key-sha256.ODS /tmp/spdk.key-sha384.1aX /tmp/spdk.key-sha512.Ejz /tmp/spdk.key-sha512.q4x /tmp/spdk.key-sha384.jRx /tmp/spdk.key-sha256.95D '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:22:46.347
00:22:46.347 real 2m32.777s
00:22:46.347 user 5m51.659s
00:22:46.347 sys 0m24.022s
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:46.347 ************************************
00:22:46.347 END TEST nvmf_auth_target
00:22:46.347 ************************************
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:46.347 ************************************
00:22:46.347 START TEST nvmf_bdevio_no_huge
00:22:46.347 ************************************
00:22:46.347 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:22:46.607 * Looking for test storage...
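Before the bdevio output begins, one recap of the host-side tooling the finished auth run relied on: alongside the SPDK host RPCs it drove the kernel initiator with nvme-cli, passing the DH-HMAC-CHAP secrets on the command line. A minimal sketch of that connect/disconnect cycle, with the DHHC-1 secrets abbreviated to '...'; the -q/--hostid values are the host NQN and ID generated for this run, and --dhchap-ctrl-secret supplies the controller-side key that makes the authentication bidirectional:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:03:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0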
00:22:46.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:46.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.607 --rc genhtml_branch_coverage=1 00:22:46.607 --rc genhtml_function_coverage=1 00:22:46.607 --rc genhtml_legend=1 00:22:46.607 --rc geninfo_all_blocks=1 00:22:46.607 --rc geninfo_unexecuted_blocks=1 00:22:46.607 00:22:46.607 ' 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:46.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.607 --rc genhtml_branch_coverage=1 00:22:46.607 --rc genhtml_function_coverage=1 00:22:46.607 --rc genhtml_legend=1 00:22:46.607 --rc geninfo_all_blocks=1 00:22:46.607 --rc geninfo_unexecuted_blocks=1 00:22:46.607 00:22:46.607 ' 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:46.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.607 --rc genhtml_branch_coverage=1 00:22:46.607 --rc genhtml_function_coverage=1 00:22:46.607 --rc genhtml_legend=1 00:22:46.607 --rc geninfo_all_blocks=1 00:22:46.607 --rc geninfo_unexecuted_blocks=1 00:22:46.607 00:22:46.607 ' 00:22:46.607 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:46.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.607 --rc genhtml_branch_coverage=1 00:22:46.607 --rc genhtml_function_coverage=1 00:22:46.607 --rc genhtml_legend=1 00:22:46.608 --rc geninfo_all_blocks=1 00:22:46.608 --rc geninfo_unexecuted_blocks=1 00:22:46.608 00:22:46.608 ' 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:46.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:46.608 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.180 
06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.180 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:53.181 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:53.181 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:53.181 Found net devices under 0000:86:00.0: cvl_0_0 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:53.181 Found net devices under 0000:86:00.1: cvl_0_1 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.181 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:53.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:22:53.182 00:22:53.182 --- 10.0.0.2 ping statistics --- 00:22:53.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.182 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:22:53.182 00:22:53.182 --- 10.0.0.1 ping statistics --- 00:22:53.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.182 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=558876 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 558876 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 558876 ']' 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:53.182 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.182 [2024-11-20 06:33:24.410589] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:22:53.182 [2024-11-20 06:33:24.410639] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:53.182 [2024-11-20 06:33:24.497820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.182 [2024-11-20 06:33:24.544375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.182 [2024-11-20 06:33:24.544411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.182 [2024-11-20 06:33:24.544417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.182 [2024-11-20 06:33:24.544423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.182 [2024-11-20 06:33:24.544428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
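The nvmf_tgt above is started with -m 0x78, and the same mask shows up as "-c 0x78" in the DPDK EAL parameters: 0x78 is binary 01111000, so bits 3 through 6 are set and the reactor lines that follow come up on exactly those four cores. A minimal sketch, assuming plain bash (this helper is illustrative, not part of the test scripts), of decoding such a core mask:

mask=0x78                          # SPDK/DPDK core mask from the command line
for core in $(seq 0 63); do
    if (( (mask >> core) & 1 )); then
        echo "reactor core $core"  # prints cores 3, 4, 5 and 6 for 0x78
    fi
done

The bdevio run further down passes -c 0x7 the same way, which is why its reactors land on cores 0 through 2.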
00:22:53.182 [2024-11-20 06:33:24.545636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:53.182 [2024-11-20 06:33:24.545668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:53.182 [2024-11-20 06:33:24.545773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.182 [2024-11-20 06:33:24.545775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.457 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.759 [2024-11-20 06:33:25.286865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.759 Malloc0 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.759 [2024-11-20 06:33:25.331156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.759 { 00:22:53.759 "params": { 00:22:53.759 "name": "Nvme$subsystem", 00:22:53.759 "trtype": "$TEST_TRANSPORT", 00:22:53.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.759 "adrfam": "ipv4", 00:22:53.759 "trsvcid": "$NVMF_PORT", 00:22:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.759 "hdgst": ${hdgst:-false}, 00:22:53.759 "ddgst": ${ddgst:-false} 00:22:53.759 }, 00:22:53.759 "method": "bdev_nvme_attach_controller" 00:22:53.759 } 00:22:53.759 EOF 00:22:53.759 )") 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:53.759 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:53.759 "params": { 00:22:53.759 "name": "Nvme1", 00:22:53.759 "trtype": "tcp", 00:22:53.759 "traddr": "10.0.0.2", 00:22:53.759 "adrfam": "ipv4", 00:22:53.759 "trsvcid": "4420", 00:22:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.759 "hdgst": false, 00:22:53.759 "ddgst": false 00:22:53.759 }, 00:22:53.759 "method": "bdev_nvme_attach_controller" 00:22:53.759 }' 00:22:53.759 [2024-11-20 06:33:25.380958] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
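gen_nvmf_target_json above assembles the bdev_nvme_attach_controller config by appending a here-doc to a bash array and printing the joined result through jq; bdevio then reads it as --json /dev/fd/62, a path of the kind a <(...) process substitution expands to. A minimal sketch of the same pattern, assuming plain bash with jq installed (the consumer function is a stand-in for the bdevio binary, not the real tool):

config=()
config+=("$(cat <<EOF
{ "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2" } }
EOF
)")
consumer() { jq . "$1"; }                  # stand-in: just pretty-print the file
consumer <(printf '%s\n' "${config[@]}")   # the <(...) expands to /dev/fd/NN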
00:22:53.759 [2024-11-20 06:33:25.381008] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid559127 ] 00:22:53.759 [2024-11-20 06:33:25.460441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:53.759 [2024-11-20 06:33:25.508478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.759 [2024-11-20 06:33:25.508582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.759 [2024-11-20 06:33:25.508582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.057 I/O targets: 00:22:54.057 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:54.057 00:22:54.057 00:22:54.057 CUnit - A unit testing framework for C - Version 2.1-3 00:22:54.057 http://cunit.sourceforge.net/ 00:22:54.057 00:22:54.057 00:22:54.057 Suite: bdevio tests on: Nvme1n1 00:22:54.057 Test: blockdev write read block ...passed 00:22:54.314 Test: blockdev write zeroes read block ...passed 00:22:54.314 Test: blockdev write zeroes read no split ...passed 00:22:54.314 Test: blockdev write zeroes read split ...passed 00:22:54.314 Test: blockdev write zeroes read split partial ...passed 00:22:54.314 Test: blockdev reset ...[2024-11-20 06:33:25.965697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:54.314 [2024-11-20 06:33:25.965760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e93920 (9): Bad file descriptor 00:22:54.314 [2024-11-20 06:33:25.979494] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:54.314 passed 00:22:54.314 Test: blockdev write read 8 blocks ...passed 00:22:54.314 Test: blockdev write read size > 128k ...passed 00:22:54.314 Test: blockdev write read invalid size ...passed 00:22:54.314 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:54.314 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:54.314 Test: blockdev write read max offset ...passed 00:22:54.314 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:54.314 Test: blockdev writev readv 8 blocks ...passed 00:22:54.314 Test: blockdev writev readv 30 x 1block ...passed 00:22:54.314 Test: blockdev writev readv block ...passed 00:22:54.573 Test: blockdev writev readv size > 128k ...passed 00:22:54.573 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:54.573 Test: blockdev comparev and writev ...[2024-11-20 06:33:26.149132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.573 [2024-11-20 06:33:26.149160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.149174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.573 [2024-11-20 06:33:26.149181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.149447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.573 [2024-11-20 06:33:26.149457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.149469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.573 [2024-11-20 06:33:26.149476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.149702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.573 [2024-11-20 06:33:26.149712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.149722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.573 [2024-11-20 06:33:26.149730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.149956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.573 [2024-11-20 06:33:26.149965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.149976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.573 [2024-11-20 06:33:26.149982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:54.573 passed 00:22:54.573 Test: blockdev nvme passthru rw ...passed 00:22:54.573 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:33:26.231608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.573 [2024-11-20 06:33:26.231628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.231737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.573 [2024-11-20 06:33:26.231746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.231842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.573 [2024-11-20 06:33:26.231851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:54.573 [2024-11-20 06:33:26.231948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.573 [2024-11-20 06:33:26.231957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:54.573 passed 00:22:54.573 Test: blockdev nvme admin passthru ...passed 00:22:54.573 Test: blockdev copy ...passed 00:22:54.573 00:22:54.573 Run Summary: Type Total Ran Passed Failed Inactive 00:22:54.573 suites 1 1 n/a 0 0 00:22:54.573 tests 23 23 23 0 0 00:22:54.573 asserts 152 152 152 0 n/a 00:22:54.573 00:22:54.573 Elapsed time = 0.985 seconds 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.832 rmmod nvme_tcp 00:22:54.832 rmmod nvme_fabrics 00:22:54.832 rmmod nvme_keyring 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 558876 ']' 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 558876 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 558876 ']' 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 558876 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:54.832 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 558876 00:22:55.090 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:22:55.090 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:22:55.090 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 558876' 00:22:55.090 killing process with pid 558876 00:22:55.090 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 558876 00:22:55.090 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 558876 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.349 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.255 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.255 00:22:57.255 real 0m10.864s 00:22:57.255 user 0m13.575s 00:22:57.255 sys 0m5.325s 00:22:57.255 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:57.255 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:57.255 ************************************ 00:22:57.255 END TEST nvmf_bdevio_no_huge 00:22:57.255 ************************************ 00:22:57.255 06:33:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:57.255 06:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:57.255 06:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:57.255 06:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:57.514 ************************************ 00:22:57.514 START TEST nvmf_tls 00:22:57.514 ************************************ 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:57.514 * Looking for test storage... 00:22:57.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:57.514 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:57.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.515 --rc genhtml_branch_coverage=1 00:22:57.515 --rc genhtml_function_coverage=1 00:22:57.515 --rc genhtml_legend=1 00:22:57.515 --rc geninfo_all_blocks=1 00:22:57.515 --rc geninfo_unexecuted_blocks=1 00:22:57.515 00:22:57.515 ' 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:57.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.515 --rc genhtml_branch_coverage=1 00:22:57.515 --rc genhtml_function_coverage=1 00:22:57.515 --rc genhtml_legend=1 00:22:57.515 --rc geninfo_all_blocks=1 00:22:57.515 --rc geninfo_unexecuted_blocks=1 00:22:57.515 00:22:57.515 ' 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:57.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.515 --rc genhtml_branch_coverage=1 00:22:57.515 --rc genhtml_function_coverage=1 00:22:57.515 --rc genhtml_legend=1 00:22:57.515 --rc geninfo_all_blocks=1 00:22:57.515 --rc geninfo_unexecuted_blocks=1 00:22:57.515 00:22:57.515 ' 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:57.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.515 --rc genhtml_branch_coverage=1 00:22:57.515 --rc genhtml_function_coverage=1 00:22:57.515 --rc genhtml_legend=1 00:22:57.515 --rc geninfo_all_blocks=1 00:22:57.515 --rc geninfo_unexecuted_blocks=1 00:22:57.515 00:22:57.515 ' 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
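The scripts/common.sh trace above is the tail of an "lt 1.15 2" check on the installed lcov: cmp_versions splits both version strings on dots (the "decimal" helper validates each field) and compares them field by field until one side wins, and because 1 < 2 the script keeps the old-style --rc lcov_branch_coverage flags. A standalone re-implementation of just that "<" path, assuming bash (the real cmp_versions also handles "-" separators and operators other than "<"):

version_lt() {                      # succeeds when dotted version $1 < $2
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                        # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2: use lcov_branch_coverage=1"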
00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.515 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.516 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
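The arrays being rebuilt here (and continued just below) bucket NICs by PCI "vendor:device" ID, with intel=0x8086 and mellanox=0x15b3 as the vendor halves; that is why the two ports reported earlier as "0000:86:00.x (0x8086 - 0x159b)" land in the e810 list. An abbreviated, illustrative lookup in the same spirit (pci_bus_cache itself is populated elsewhere in common.sh, and this device list is far from complete):

declare -A nic_family=(
    [0x8086:0x1592]=e810  [0x8086:0x159b]=e810   # Intel E810 variants
    [0x8086:0x37d2]=x722                         # Intel X722
    [0x15b3:0x1017]=mlx   [0x15b3:0x101d]=mlx    # Mellanox ConnectX
)
echo "0000:86:00.0 -> ${nic_family[0x8086:0x159b]}"   # prints e810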
00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.087 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:04.088 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:04.088 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:04.088 Found net devices under 0000:86:00.0: cvl_0_0 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:04.088 Found net devices under 0000:86:00.1: cvl_0_1 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:04.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:23:04.088 00:23:04.088 --- 10.0.0.2 ping statistics --- 00:23:04.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.088 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:23:04.088 00:23:04.088 --- 10.0.0.1 ping statistics --- 00:23:04.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.088 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=562913 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 562913 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 562913 ']' 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:04.088 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.088 [2024-11-20 06:33:35.389876] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
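The nvmf_tcp_init sequence above moves one E810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace for the target and leaves the other (cvl_0_1, 10.0.0.1) on the host for the initiator, opens TCP/4420, and confirms reachability with the two pings. A minimal sketch of the same topology on a box without the physical ports, using a veth pair (veth and namespace names are hypothetical; addresses and port follow the trace):

ip netns add spdk_tgt_ns
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns
ip addr add 10.0.0.1/24 dev veth_init
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port, as above
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1     # target -> initiator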
00:23:04.089 [2024-11-20 06:33:35.389919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.089 [2024-11-20 06:33:35.466110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.089 [2024-11-20 06:33:35.506948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.089 [2024-11-20 06:33:35.506984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.089 [2024-11-20 06:33:35.506991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.089 [2024-11-20 06:33:35.506997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.089 [2024-11-20 06:33:35.507002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.089 [2024-11-20 06:33:35.507579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:04.089 true 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:04.089 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:04.347 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:04.347 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:04.347 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:04.347 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:04.347 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:04.606 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:04.606 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:04.606 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:04.865 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:04.865 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:05.124 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:05.124 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:05.124 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.124 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:05.124 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:05.124 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:05.124 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:05.383 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.383 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:05.642 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:05.642 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:05.642 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:05.642 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.642 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.p2B4zWkw6i 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.wAjNDn2XZV 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:05.901 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.p2B4zWkw6i 00:23:06.160 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.wAjNDn2XZV 00:23:06.160 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:06.160 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:06.419 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.p2B4zWkw6i 00:23:06.419 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.p2B4zWkw6i 00:23:06.419 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:06.678 [2024-11-20 06:33:38.344993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.678 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:06.937 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:06.937 [2024-11-20 06:33:38.697898] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.937 [2024-11-20 06:33:38.698127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.937 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:07.196 malloc0 00:23:07.196 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:07.455 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.p2B4zWkw6i 00:23:07.455 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.714 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.p2B4zWkw6i 00:23:19.919 Initializing NVMe Controllers 00:23:19.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.919 Initialization complete. Launching workers. 00:23:19.919 ======================================================== 00:23:19.919 Latency(us) 00:23:19.919 Device Information : IOPS MiB/s Average min max 00:23:19.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16886.75 65.96 3790.04 789.87 6167.10 00:23:19.919 ======================================================== 00:23:19.919 Total : 16886.75 65.96 3790.04 789.87 6167.10 00:23:19.919 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p2B4zWkw6i 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.p2B4zWkw6i 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=565257 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 565257 /var/tmp/bdevperf.sock 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 565257 ']' 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:19.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.919 [2024-11-20 06:33:49.617660] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:23:19.919 [2024-11-20 06:33:49.617707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565257 ] 00:23:19.919 [2024-11-20 06:33:49.692374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.919 [2024-11-20 06:33:49.733862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p2B4zWkw6i 00:23:19.919 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.919 [2024-11-20 06:33:50.163730] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.919 TLSTESTn1 00:23:19.919 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:19.919 Running I/O for 10 seconds... 
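The two keys created earlier (key_path /tmp/tmp.p2B4zWkw6i holding the 00112233... PSK, key_2_path /tmp/tmp.wAjNDn2XZV holding the ffeedd... PSK) are in the NVMe TLS PSK interchange format NVMeTLSkey-1:<hh>:<base64>:. A sketch of what the inline format_key python above appears to compute, assuming the base64 payload is the configured key bytes followed by their little-endian CRC32 (the CRC detail is inferred from the output length, not shown in the trace):

format_interchange_psk() {
  # Sketch of the helper traced above; hash id 1 renders as the "01" field.
  local key=$1 hash_id=$2
  python3 - "$key" "$hash_id" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 1
# expected to reproduce key0 above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: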
00:23:20.853 5409.00 IOPS, 21.13 MiB/s [2024-11-20T05:33:53.623Z] 5476.00 IOPS, 21.39 MiB/s [2024-11-20T05:33:54.559Z] 5511.33 IOPS, 21.53 MiB/s [2024-11-20T05:33:55.495Z] 5515.50 IOPS, 21.54 MiB/s [2024-11-20T05:33:56.430Z] 5545.40 IOPS, 21.66 MiB/s [2024-11-20T05:33:57.367Z] 5516.00 IOPS, 21.55 MiB/s [2024-11-20T05:33:58.743Z] 5535.00 IOPS, 21.62 MiB/s [2024-11-20T05:33:59.680Z] 5543.25 IOPS, 21.65 MiB/s [2024-11-20T05:34:00.617Z] 5555.33 IOPS, 21.70 MiB/s [2024-11-20T05:34:00.617Z] 5551.90 IOPS, 21.69 MiB/s 00:23:28.781 Latency(us) 00:23:28.781 [2024-11-20T05:34:00.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.781 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.781 Verification LBA range: start 0x0 length 0x2000 00:23:28.781 TLSTESTn1 : 10.01 5556.67 21.71 0.00 0.00 23002.28 5710.99 27088.21 00:23:28.781 [2024-11-20T05:34:00.617Z] =================================================================================================================== 00:23:28.781 [2024-11-20T05:34:00.617Z] Total : 5556.67 21.71 0.00 0.00 23002.28 5710.99 27088.21 00:23:28.781 { 00:23:28.781 "results": [ 00:23:28.781 { 00:23:28.781 "job": "TLSTESTn1", 00:23:28.781 "core_mask": "0x4", 00:23:28.781 "workload": "verify", 00:23:28.781 "status": "finished", 00:23:28.781 "verify_range": { 00:23:28.781 "start": 0, 00:23:28.781 "length": 8192 00:23:28.781 }, 00:23:28.781 "queue_depth": 128, 00:23:28.781 "io_size": 4096, 00:23:28.781 "runtime": 10.014263, 00:23:28.781 "iops": 5556.674515139057, 00:23:28.781 "mibps": 21.70575982476194, 00:23:28.781 "io_failed": 0, 00:23:28.781 "io_timeout": 0, 00:23:28.781 "avg_latency_us": 23002.276200539807, 00:23:28.781 "min_latency_us": 5710.994285714286, 00:23:28.781 "max_latency_us": 27088.213333333333 00:23:28.781 } 00:23:28.781 ], 00:23:28.781 "core_count": 1 00:23:28.781 } 00:23:28.781 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.781 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 565257 00:23:28.781 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 565257 ']' 00:23:28.781 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 565257 00:23:28.781 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 565257 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 565257' 00:23:28.782 killing process with pid 565257 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 565257 00:23:28.782 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.782 00:23:28.782 Latency(us) 00:23:28.782 [2024-11-20T05:34:00.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.782 [2024-11-20T05:34:00.618Z] 
=================================================================================================================== 00:23:28.782 [2024-11-20T05:34:00.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 565257 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wAjNDn2XZV 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wAjNDn2XZV 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wAjNDn2XZV 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wAjNDn2XZV 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=567094 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 567094 /var/tmp/bdevperf.sock 00:23:28.782 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 567094 ']' 00:23:29.042 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.042 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:29.042 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:29.042 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:29.042 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.042 [2024-11-20 06:34:00.654767] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:23:29.042 [2024-11-20 06:34:00.654813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567094 ] 00:23:29.042 [2024-11-20 06:34:00.727851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.042 [2024-11-20 06:34:00.767388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.042 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:29.042 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:29.042 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wAjNDn2XZV 00:23:29.301 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.560 [2024-11-20 06:34:01.213131] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.560 [2024-11-20 06:34:01.217730] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:29.560 [2024-11-20 06:34:01.218377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275170 (107): Transport endpoint is not connected 00:23:29.560 [2024-11-20 06:34:01.219370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275170 (9): Bad file descriptor 00:23:29.560 [2024-11-20 06:34:01.220371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:29.560 [2024-11-20 06:34:01.220380] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:29.560 [2024-11-20 06:34:01.220388] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:29.560 [2024-11-20 06:34:01.220398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:29.560 request: 00:23:29.560 { 00:23:29.560 "name": "TLSTEST", 00:23:29.560 "trtype": "tcp", 00:23:29.560 "traddr": "10.0.0.2", 00:23:29.560 "adrfam": "ipv4", 00:23:29.560 "trsvcid": "4420", 00:23:29.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.560 "prchk_reftag": false, 00:23:29.560 "prchk_guard": false, 00:23:29.560 "hdgst": false, 00:23:29.560 "ddgst": false, 00:23:29.560 "psk": "key0", 00:23:29.560 "allow_unrecognized_csi": false, 00:23:29.560 "method": "bdev_nvme_attach_controller", 00:23:29.560 "req_id": 1 00:23:29.560 } 00:23:29.560 Got JSON-RPC error response 00:23:29.560 response: 00:23:29.560 { 00:23:29.560 "code": -5, 00:23:29.560 "message": "Input/output error" 00:23:29.560 } 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 567094 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 567094 ']' 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 567094 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567094 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567094' 00:23:29.560 killing process with pid 567094 00:23:29.560 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 567094 00:23:29.560 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.560 00:23:29.560 Latency(us) 00:23:29.560 [2024-11-20T05:34:01.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.560 [2024-11-20T05:34:01.396Z] =================================================================================================================== 00:23:29.560 [2024-11-20T05:34:01.397Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.561 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 567094 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.p2B4zWkw6i 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.p2B4zWkw6i 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.p2B4zWkw6i 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.p2B4zWkw6i 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=567112 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 567112 /var/tmp/bdevperf.sock 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 567112 ']' 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:29.819 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.819 [2024-11-20 06:34:01.502231] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:23:29.819 [2024-11-20 06:34:01.502282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567112 ] 00:23:29.819 [2024-11-20 06:34:01.578673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.819 [2024-11-20 06:34:01.618499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.078 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:30.078 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:30.078 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p2B4zWkw6i 00:23:30.337 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:30.337 [2024-11-20 06:34:02.097319] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.337 [2024-11-20 06:34:02.102069] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:30.337 [2024-11-20 06:34:02.102093] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:30.337 [2024-11-20 06:34:02.102139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:30.337 [2024-11-20 06:34:02.102784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b4170 (107): Transport endpoint is not connected 00:23:30.337 [2024-11-20 06:34:02.103776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b4170 (9): Bad file descriptor 00:23:30.337 [2024-11-20 06:34:02.104778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:30.337 [2024-11-20 06:34:02.104787] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:30.337 [2024-11-20 06:34:02.104794] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:30.337 [2024-11-20 06:34:02.104803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:30.337 request: 00:23:30.337 { 00:23:30.337 "name": "TLSTEST", 00:23:30.337 "trtype": "tcp", 00:23:30.337 "traddr": "10.0.0.2", 00:23:30.337 "adrfam": "ipv4", 00:23:30.337 "trsvcid": "4420", 00:23:30.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.337 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:30.337 "prchk_reftag": false, 00:23:30.337 "prchk_guard": false, 00:23:30.337 "hdgst": false, 00:23:30.337 "ddgst": false, 00:23:30.337 "psk": "key0", 00:23:30.337 "allow_unrecognized_csi": false, 00:23:30.337 "method": "bdev_nvme_attach_controller", 00:23:30.337 "req_id": 1 00:23:30.337 } 00:23:30.337 Got JSON-RPC error response 00:23:30.337 response: 00:23:30.337 { 00:23:30.337 "code": -5, 00:23:30.337 "message": "Input/output error" 00:23:30.337 } 00:23:30.337 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 567112 00:23:30.337 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 567112 ']' 00:23:30.337 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 567112 00:23:30.337 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:30.337 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.337 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567112 00:23:30.596 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:30.596 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:30.596 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567112' 00:23:30.596 killing process with pid 567112 00:23:30.596 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 567112 00:23:30.596 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.596 00:23:30.596 Latency(us) 00:23:30.596 [2024-11-20T05:34:02.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.596 [2024-11-20T05:34:02.432Z] =================================================================================================================== 00:23:30.596 [2024-11-20T05:34:02.432Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.596 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 567112 00:23:30.596 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.p2B4zWkw6i 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.p2B4zWkw6i 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.p2B4zWkw6i 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.p2B4zWkw6i 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=567346 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 567346 /var/tmp/bdevperf.sock 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 567346 ']' 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:30.597 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.597 [2024-11-20 06:34:02.383188] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:23:30.597 [2024-11-20 06:34:02.383247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567346 ] 00:23:30.856 [2024-11-20 06:34:02.454775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.856 [2024-11-20 06:34:02.491730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.856 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:30.856 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:30.856 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p2B4zWkw6i 00:23:31.114 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.373 [2024-11-20 06:34:02.949515] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.373 [2024-11-20 06:34:02.954155] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:31.373 [2024-11-20 06:34:02.954178] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:31.373 [2024-11-20 06:34:02.954207] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:31.373 [2024-11-20 06:34:02.954857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cac170 (107): Transport endpoint is not connected 00:23:31.373 [2024-11-20 06:34:02.955850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cac170 (9): Bad file descriptor 00:23:31.374 [2024-11-20 06:34:02.956851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:31.374 [2024-11-20 06:34:02.956861] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:31.374 [2024-11-20 06:34:02.956868] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:31.374 [2024-11-20 06:34:02.956879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:31.374 request: 00:23:31.374 { 00:23:31.374 "name": "TLSTEST", 00:23:31.374 "trtype": "tcp", 00:23:31.374 "traddr": "10.0.0.2", 00:23:31.374 "adrfam": "ipv4", 00:23:31.374 "trsvcid": "4420", 00:23:31.374 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.374 "prchk_reftag": false, 00:23:31.374 "prchk_guard": false, 00:23:31.374 "hdgst": false, 00:23:31.374 "ddgst": false, 00:23:31.374 "psk": "key0", 00:23:31.374 "allow_unrecognized_csi": false, 00:23:31.374 "method": "bdev_nvme_attach_controller", 00:23:31.374 "req_id": 1 00:23:31.374 } 00:23:31.374 Got JSON-RPC error response 00:23:31.374 response: 00:23:31.374 { 00:23:31.374 "code": -5, 00:23:31.374 "message": "Input/output error" 00:23:31.374 } 00:23:31.374 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 567346 00:23:31.374 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 567346 ']' 00:23:31.374 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 567346 00:23:31.374 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:31.374 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:31.374 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567346 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567346' 00:23:31.374 killing process with pid 567346 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 567346 00:23:31.374 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.374 00:23:31.374 Latency(us) 00:23:31.374 [2024-11-20T05:34:03.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.374 [2024-11-20T05:34:03.210Z] =================================================================================================================== 00:23:31.374 [2024-11-20T05:34:03.210Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 567346 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.374 06:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=567520 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 567520 /var/tmp/bdevperf.sock 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 567520 ']' 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:31.374 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.633 [2024-11-20 06:34:03.235799] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
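The NOT/valid_exec_arg dance above is the harness's negative-test wrapper: it checks the argument is callable, runs it, and passes only when the wrapped call fails. A minimal paraphrase of that helper, not the exact common/autotest_common.sh implementation (which also distinguishes exit codes above 128):

    # sketch of the harness's NOT helper: succeed only when the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # nonzero exit from the command means the negative test passed
    }
    # usage, as in this run: an empty PSK path must make run_bdevperf fail
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''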
00:23:31.633 [2024-11-20 06:34:03.235852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567520 ] 00:23:31.633 [2024-11-20 06:34:03.310887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.633 [2024-11-20 06:34:03.348928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.633 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:31.633 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:31.633 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:31.892 [2024-11-20 06:34:03.602590] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:31.892 [2024-11-20 06:34:03.602621] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:31.892 request: 00:23:31.892 { 00:23:31.892 "name": "key0", 00:23:31.892 "path": "", 00:23:31.892 "method": "keyring_file_add_key", 00:23:31.892 "req_id": 1 00:23:31.892 } 00:23:31.892 Got JSON-RPC error response 00:23:31.892 response: 00:23:31.892 { 00:23:31.892 "code": -1, 00:23:31.892 "message": "Operation not permitted" 00:23:31.892 } 00:23:31.892 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.151 [2024-11-20 06:34:03.803199] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.151 [2024-11-20 06:34:03.803233] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:32.151 request: 00:23:32.151 { 00:23:32.151 "name": "TLSTEST", 00:23:32.151 "trtype": "tcp", 00:23:32.151 "traddr": "10.0.0.2", 00:23:32.151 "adrfam": "ipv4", 00:23:32.151 "trsvcid": "4420", 00:23:32.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.151 "prchk_reftag": false, 00:23:32.151 "prchk_guard": false, 00:23:32.151 "hdgst": false, 00:23:32.151 "ddgst": false, 00:23:32.151 "psk": "key0", 00:23:32.151 "allow_unrecognized_csi": false, 00:23:32.151 "method": "bdev_nvme_attach_controller", 00:23:32.151 "req_id": 1 00:23:32.151 } 00:23:32.151 Got JSON-RPC error response 00:23:32.151 response: 00:23:32.151 { 00:23:32.151 "code": -126, 00:23:32.151 "message": "Required key not available" 00:23:32.151 } 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 567520 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 567520 ']' 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 567520 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567520 
00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567520' 00:23:32.151 killing process with pid 567520 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 567520 00:23:32.151 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.151 00:23:32.151 Latency(us) 00:23:32.151 [2024-11-20T05:34:03.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.151 [2024-11-20T05:34:03.987Z] =================================================================================================================== 00:23:32.151 [2024-11-20T05:34:03.987Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:32.151 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 567520 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 562913 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 562913 ']' 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 562913 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 562913 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 562913' 00:23:32.410 killing process with pid 562913 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 562913 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 562913 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:32.410 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.PwrbRy4nk8 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.PwrbRy4nk8 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=567613 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 567613 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 567613 ']' 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:32.670 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.670 [2024-11-20 06:34:04.335893] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:23:32.670 [2024-11-20 06:34:04.335943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.670 [2024-11-20 06:34:04.400812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.670 [2024-11-20 06:34:04.442009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.670 [2024-11-20 06:34:04.442045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
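The key_long printed above is the NVMe/TCP TLS PSK interchange format: the NVMeTLSkey-1 prefix, a hash identifier (02 selects SHA-384, matching the digest argument 2), and base64 of the configured PSK with a 4-byte CRC32 appended. A sketch that reproduces it, assuming, as nvmf/common.sh's format_key appears to, that the 48-character string is taken literally as key bytes and that the CRC32 is appended little-endian:

    python3 - <<'EOF'
    import base64, zlib
    key = b"00112233445566778899aabbccddeeff0011223344556677"  # 48 bytes of key material
    crc = zlib.crc32(key).to_bytes(4, "little")                 # integrity tail of the interchange format
    print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
    EOF
    # should print the same NVMeTLSkey-1:02:MDAx...wWXNJw==: value assigned to key_long above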
00:23:32.670 [2024-11-20 06:34:04.442055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.670 [2024-11-20 06:34:04.442061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.670 [2024-11-20 06:34:04.442066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.670 [2024-11-20 06:34:04.442639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.PwrbRy4nk8 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PwrbRy4nk8 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:32.929 [2024-11-20 06:34:04.745392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.929 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:33.188 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:33.447 [2024-11-20 06:34:05.110346] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.447 [2024-11-20 06:34:05.110574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.447 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.706 malloc0 00:23:33.706 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.706 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:23:33.965 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PwrbRy4nk8 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PwrbRy4nk8 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=567940 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 567940 /var/tmp/bdevperf.sock 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 567940 ']' 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.224 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.224 [2024-11-20 06:34:05.922370] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
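For this first passing run, the target side was prepared just above by setup_nvmf_tgt, and the sequence is identical every time it reappears later in the log; condensed here, with the long rpc.py path shortened:

    # setup_nvmf_tgt (target/tls.sh@50-59), as driven through scripts/rpc.py above
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k marks the listener as TLS-capable
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8   # only succeeds while the file is 0600
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0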
00:23:34.224 [2024-11-20 06:34:05.922425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567940 ] 00:23:34.224 [2024-11-20 06:34:05.998736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.224 [2024-11-20 06:34:06.039922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.483 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.483 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:34.483 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:23:34.483 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.742 [2024-11-20 06:34:06.469829] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.742 TLSTESTn1 00:23:34.742 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:35.001 Running I/O for 10 seconds... 00:23:36.874 5393.00 IOPS, 21.07 MiB/s [2024-11-20T05:34:10.090Z] 5513.50 IOPS, 21.54 MiB/s [2024-11-20T05:34:11.026Z] 5572.67 IOPS, 21.77 MiB/s [2024-11-20T05:34:11.993Z] 5540.00 IOPS, 21.64 MiB/s [2024-11-20T05:34:12.930Z] 5559.80 IOPS, 21.72 MiB/s [2024-11-20T05:34:13.868Z] 5567.33 IOPS, 21.75 MiB/s [2024-11-20T05:34:14.804Z] 5593.57 IOPS, 21.85 MiB/s [2024-11-20T05:34:15.740Z] 5586.75 IOPS, 21.82 MiB/s [2024-11-20T05:34:17.118Z] 5596.67 IOPS, 21.86 MiB/s [2024-11-20T05:34:17.118Z] 5585.10 IOPS, 21.82 MiB/s 00:23:45.282 Latency(us) 00:23:45.282 [2024-11-20T05:34:17.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.282 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:45.282 Verification LBA range: start 0x0 length 0x2000 00:23:45.283 TLSTESTn1 : 10.01 5590.87 21.84 0.00 0.00 22862.12 5492.54 23967.45 00:23:45.283 [2024-11-20T05:34:17.119Z] =================================================================================================================== 00:23:45.283 [2024-11-20T05:34:17.119Z] Total : 5590.87 21.84 0.00 0.00 22862.12 5492.54 23967.45 00:23:45.283 { 00:23:45.283 "results": [ 00:23:45.283 { 00:23:45.283 "job": "TLSTESTn1", 00:23:45.283 "core_mask": "0x4", 00:23:45.283 "workload": "verify", 00:23:45.283 "status": "finished", 00:23:45.283 "verify_range": { 00:23:45.283 "start": 0, 00:23:45.283 "length": 8192 00:23:45.283 }, 00:23:45.283 "queue_depth": 128, 00:23:45.283 "io_size": 4096, 00:23:45.283 "runtime": 10.012575, 00:23:45.283 "iops": 5590.869481626854, 00:23:45.283 "mibps": 21.8393339126049, 00:23:45.283 "io_failed": 0, 00:23:45.283 "io_timeout": 0, 00:23:45.283 "avg_latency_us": 22862.116965154448, 00:23:45.283 "min_latency_us": 5492.540952380952, 00:23:45.283 "max_latency_us": 23967.45142857143 00:23:45.283 } 00:23:45.283 ], 00:23:45.283 
"core_count": 1 00:23:45.283 } 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 567940 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 567940 ']' 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 567940 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567940 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567940' 00:23:45.283 killing process with pid 567940 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 567940 00:23:45.283 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.283 00:23:45.283 Latency(us) 00:23:45.283 [2024-11-20T05:34:17.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.283 [2024-11-20T05:34:17.119Z] =================================================================================================================== 00:23:45.283 [2024-11-20T05:34:17.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 567940 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.PwrbRy4nk8 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PwrbRy4nk8 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PwrbRy4nk8 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PwrbRy4nk8 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:45.283 
06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PwrbRy4nk8 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=569702 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 569702 /var/tmp/bdevperf.sock 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 569702 ']' 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:45.283 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.283 [2024-11-20 06:34:16.975589] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
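The failure being provoked in this run is purely a file-mode check: after the chmod 0666 above, keyring.c rejects the key file ('Invalid permissions ... 0100666', below) and keyring_file_add_key returns 'Operation not permitted'; the same file passes again once it is back to 0600 later in the log. A pre-flight check along these lines can catch that before the RPC does (the stat invocation is an illustration, not something this harness runs):

    # keyring_file rejects key files that group/others can access; verify the mode first
    perms=$(stat -c '%a' /tmp/tmp.PwrbRy4nk8)
    if [[ "$perms" != 600 ]]; then
        echo "key file mode is $perms, expected 600" >&2
        exit 1
    fi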
00:23:45.283 [2024-11-20 06:34:16.975636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569702 ] 00:23:45.283 [2024-11-20 06:34:17.049771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.283 [2024-11-20 06:34:17.090812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.542 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:45.542 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:45.542 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:23:45.542 [2024-11-20 06:34:17.356240] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PwrbRy4nk8': 0100666 00:23:45.542 [2024-11-20 06:34:17.356266] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:45.542 request: 00:23:45.542 { 00:23:45.542 "name": "key0", 00:23:45.542 "path": "/tmp/tmp.PwrbRy4nk8", 00:23:45.542 "method": "keyring_file_add_key", 00:23:45.542 "req_id": 1 00:23:45.542 } 00:23:45.542 Got JSON-RPC error response 00:23:45.542 response: 00:23:45.542 { 00:23:45.542 "code": -1, 00:23:45.542 "message": "Operation not permitted" 00:23:45.542 } 00:23:45.542 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:45.801 [2024-11-20 06:34:17.528770] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.801 [2024-11-20 06:34:17.528803] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:45.801 request: 00:23:45.801 { 00:23:45.801 "name": "TLSTEST", 00:23:45.801 "trtype": "tcp", 00:23:45.801 "traddr": "10.0.0.2", 00:23:45.801 "adrfam": "ipv4", 00:23:45.801 "trsvcid": "4420", 00:23:45.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.801 "prchk_reftag": false, 00:23:45.801 "prchk_guard": false, 00:23:45.801 "hdgst": false, 00:23:45.801 "ddgst": false, 00:23:45.801 "psk": "key0", 00:23:45.801 "allow_unrecognized_csi": false, 00:23:45.801 "method": "bdev_nvme_attach_controller", 00:23:45.801 "req_id": 1 00:23:45.801 } 00:23:45.801 Got JSON-RPC error response 00:23:45.801 response: 00:23:45.801 { 00:23:45.801 "code": -126, 00:23:45.801 "message": "Required key not available" 00:23:45.801 } 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 569702 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 569702 ']' 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 569702 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 569702 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 569702' 00:23:45.801 killing process with pid 569702 00:23:45.801 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 569702 00:23:45.801 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.801 00:23:45.801 Latency(us) 00:23:45.801 [2024-11-20T05:34:17.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.801 [2024-11-20T05:34:17.638Z] =================================================================================================================== 00:23:45.802 [2024-11-20T05:34:17.638Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:45.802 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 569702 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 567613 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 567613 ']' 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 567613 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567613 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567613' 00:23:46.061 killing process with pid 567613 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 567613 00:23:46.061 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 567613 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=569939 
00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 569939 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 569939 ']' 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:46.320 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.320 [2024-11-20 06:34:18.017842] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:23:46.320 [2024-11-20 06:34:18.017894] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.320 [2024-11-20 06:34:18.095479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.320 [2024-11-20 06:34:18.130044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.320 [2024-11-20 06:34:18.130081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.320 [2024-11-20 06:34:18.130088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.320 [2024-11-20 06:34:18.130094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.321 [2024-11-20 06:34:18.130098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
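The setup attempt that follows fails in two stages: keyring_file_add_key is rejected while the file is still 0666, and nvmf_subsystem_add_host --psk key0 then reports 'Key key0 does not exist' (-32603 Internal error) because nothing was ever registered under that name. Outside a negative test it may be worth confirming the key landed before wiring it to a host; keyring_get_keys is SPDK's listing RPC, though this log never calls it, so treat the check below as an assumption:

    # only add the host once the key is actually present in the keyring
    rpc.py keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8
    rpc.py keyring_get_keys | grep -q '"name": "key0"' &&
        rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0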
00:23:46.321 [2024-11-20 06:34:18.130679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.PwrbRy4nk8 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PwrbRy4nk8 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.PwrbRy4nk8 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PwrbRy4nk8 00:23:46.589 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.881 [2024-11-20 06:34:18.441632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.881 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:46.881 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.185 [2024-11-20 06:34:18.834659] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.186 [2024-11-20 06:34:18.834855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.186 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.467 malloc0 00:23:47.467 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:47.467 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:23:47.726 [2024-11-20 
06:34:19.412289] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PwrbRy4nk8': 0100666 00:23:47.726 [2024-11-20 06:34:19.412318] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:47.726 request: 00:23:47.726 { 00:23:47.726 "name": "key0", 00:23:47.726 "path": "/tmp/tmp.PwrbRy4nk8", 00:23:47.726 "method": "keyring_file_add_key", 00:23:47.726 "req_id": 1 00:23:47.726 } 00:23:47.726 Got JSON-RPC error response 00:23:47.726 response: 00:23:47.726 { 00:23:47.726 "code": -1, 00:23:47.726 "message": "Operation not permitted" 00:23:47.726 } 00:23:47.726 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:47.985 [2024-11-20 06:34:19.596780] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:47.985 [2024-11-20 06:34:19.596810] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:47.985 request: 00:23:47.985 { 00:23:47.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.985 "host": "nqn.2016-06.io.spdk:host1", 00:23:47.985 "psk": "key0", 00:23:47.985 "method": "nvmf_subsystem_add_host", 00:23:47.985 "req_id": 1 00:23:47.985 } 00:23:47.985 Got JSON-RPC error response 00:23:47.985 response: 00:23:47.985 { 00:23:47.985 "code": -32603, 00:23:47.985 "message": "Internal error" 00:23:47.985 } 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 569939 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 569939 ']' 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 569939 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 569939 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 569939' 00:23:47.985 killing process with pid 569939 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 569939 00:23:47.985 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 569939 00:23:48.243 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.PwrbRy4nk8 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=570216 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 570216 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 570216 ']' 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:48.244 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.244 [2024-11-20 06:34:19.899504] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:23:48.244 [2024-11-20 06:34:19.899553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.244 [2024-11-20 06:34:19.977663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.244 [2024-11-20 06:34:20.019612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.244 [2024-11-20 06:34:20.019644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.244 [2024-11-20 06:34:20.019652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.244 [2024-11-20 06:34:20.019659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.244 [2024-11-20 06:34:20.019665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
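Each target restart in this log, including this final one, uses the same nvmfappstart recipe: launch nvmf_tgt inside the test's network namespace, record the pid, and poll until the RPC socket answers. Condensed from the lines here, again with SPDK_ROOT standing in for the workspace prefix:

    # start the target in the cvl_0_0_ns_spdk namespace (nvmf/common.sh@508-510)
    ip netns exec cvl_0_0_ns_spdk $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten polls the pid and RPC socket; rpc_addr defaults to /var/tmp/spdk.sock
    waitforlisten "$nvmfpid"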
00:23:48.244 [2024-11-20 06:34:20.020106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.PwrbRy4nk8 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PwrbRy4nk8 00:23:48.503 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:48.761 [2024-11-20 06:34:20.335861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.761 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:48.761 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:49.020 [2024-11-20 06:34:20.720845] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.020 [2024-11-20 06:34:20.721052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.020 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:49.278 malloc0 00:23:49.278 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:49.537 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:23:49.537 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=570560 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 570560 /var/tmp/bdevperf.sock 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 570560 ']' 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.795 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.795 [2024-11-20 06:34:21.583079] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:23:49.795 [2024-11-20 06:34:21.583132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570560 ] 00:23:50.053 [2024-11-20 06:34:21.658649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.053 [2024-11-20 06:34:21.699049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.053 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:50.053 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:50.053 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:23:50.312 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.570 [2024-11-20 06:34:22.153986] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.570 TLSTESTn1 00:23:50.570 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:50.830 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:50.830 "subsystems": [ 00:23:50.830 { 00:23:50.830 "subsystem": "keyring", 00:23:50.830 "config": [ 00:23:50.830 { 00:23:50.830 "method": "keyring_file_add_key", 00:23:50.830 "params": { 00:23:50.830 "name": "key0", 00:23:50.830 "path": "/tmp/tmp.PwrbRy4nk8" 00:23:50.830 } 00:23:50.830 } 00:23:50.830 ] 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "subsystem": "iobuf", 00:23:50.830 "config": [ 00:23:50.830 { 00:23:50.830 "method": "iobuf_set_options", 00:23:50.830 "params": { 00:23:50.830 "small_pool_count": 8192, 00:23:50.830 "large_pool_count": 1024, 00:23:50.830 "small_bufsize": 8192, 00:23:50.830 "large_bufsize": 135168, 00:23:50.830 "enable_numa": false 00:23:50.830 } 00:23:50.830 } 00:23:50.830 ] 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "subsystem": "sock", 00:23:50.830 "config": [ 00:23:50.830 { 00:23:50.830 "method": "sock_set_default_impl", 00:23:50.830 "params": { 00:23:50.830 "impl_name": "posix" 
00:23:50.830 } 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "method": "sock_impl_set_options", 00:23:50.830 "params": { 00:23:50.830 "impl_name": "ssl", 00:23:50.830 "recv_buf_size": 4096, 00:23:50.830 "send_buf_size": 4096, 00:23:50.830 "enable_recv_pipe": true, 00:23:50.830 "enable_quickack": false, 00:23:50.830 "enable_placement_id": 0, 00:23:50.830 "enable_zerocopy_send_server": true, 00:23:50.830 "enable_zerocopy_send_client": false, 00:23:50.830 "zerocopy_threshold": 0, 00:23:50.830 "tls_version": 0, 00:23:50.830 "enable_ktls": false 00:23:50.830 } 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "method": "sock_impl_set_options", 00:23:50.830 "params": { 00:23:50.830 "impl_name": "posix", 00:23:50.830 "recv_buf_size": 2097152, 00:23:50.830 "send_buf_size": 2097152, 00:23:50.830 "enable_recv_pipe": true, 00:23:50.830 "enable_quickack": false, 00:23:50.830 "enable_placement_id": 0, 00:23:50.830 "enable_zerocopy_send_server": true, 00:23:50.830 "enable_zerocopy_send_client": false, 00:23:50.830 "zerocopy_threshold": 0, 00:23:50.830 "tls_version": 0, 00:23:50.830 "enable_ktls": false 00:23:50.830 } 00:23:50.830 } 00:23:50.830 ] 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "subsystem": "vmd", 00:23:50.830 "config": [] 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "subsystem": "accel", 00:23:50.830 "config": [ 00:23:50.830 { 00:23:50.830 "method": "accel_set_options", 00:23:50.830 "params": { 00:23:50.830 "small_cache_size": 128, 00:23:50.830 "large_cache_size": 16, 00:23:50.830 "task_count": 2048, 00:23:50.830 "sequence_count": 2048, 00:23:50.830 "buf_count": 2048 00:23:50.830 } 00:23:50.830 } 00:23:50.830 ] 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "subsystem": "bdev", 00:23:50.830 "config": [ 00:23:50.830 { 00:23:50.830 "method": "bdev_set_options", 00:23:50.830 "params": { 00:23:50.830 "bdev_io_pool_size": 65535, 00:23:50.830 "bdev_io_cache_size": 256, 00:23:50.830 "bdev_auto_examine": true, 00:23:50.830 "iobuf_small_cache_size": 128, 00:23:50.830 "iobuf_large_cache_size": 16 00:23:50.830 } 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "method": "bdev_raid_set_options", 00:23:50.830 "params": { 00:23:50.830 "process_window_size_kb": 1024, 00:23:50.830 "process_max_bandwidth_mb_sec": 0 00:23:50.830 } 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "method": "bdev_iscsi_set_options", 00:23:50.830 "params": { 00:23:50.830 "timeout_sec": 30 00:23:50.830 } 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "method": "bdev_nvme_set_options", 00:23:50.830 "params": { 00:23:50.830 "action_on_timeout": "none", 00:23:50.830 "timeout_us": 0, 00:23:50.830 "timeout_admin_us": 0, 00:23:50.830 "keep_alive_timeout_ms": 10000, 00:23:50.830 "arbitration_burst": 0, 00:23:50.830 "low_priority_weight": 0, 00:23:50.830 "medium_priority_weight": 0, 00:23:50.830 "high_priority_weight": 0, 00:23:50.830 "nvme_adminq_poll_period_us": 10000, 00:23:50.830 "nvme_ioq_poll_period_us": 0, 00:23:50.830 "io_queue_requests": 0, 00:23:50.830 "delay_cmd_submit": true, 00:23:50.830 "transport_retry_count": 4, 00:23:50.830 "bdev_retry_count": 3, 00:23:50.830 "transport_ack_timeout": 0, 00:23:50.830 "ctrlr_loss_timeout_sec": 0, 00:23:50.830 "reconnect_delay_sec": 0, 00:23:50.830 "fast_io_fail_timeout_sec": 0, 00:23:50.830 "disable_auto_failback": false, 00:23:50.830 "generate_uuids": false, 00:23:50.830 "transport_tos": 0, 00:23:50.830 "nvme_error_stat": false, 00:23:50.830 "rdma_srq_size": 0, 00:23:50.830 "io_path_stat": false, 00:23:50.830 "allow_accel_sequence": false, 00:23:50.830 "rdma_max_cq_size": 0, 00:23:50.830 
"rdma_cm_event_timeout_ms": 0, 00:23:50.830 "dhchap_digests": [ 00:23:50.830 "sha256", 00:23:50.830 "sha384", 00:23:50.830 "sha512" 00:23:50.830 ], 00:23:50.830 "dhchap_dhgroups": [ 00:23:50.830 "null", 00:23:50.830 "ffdhe2048", 00:23:50.830 "ffdhe3072", 00:23:50.830 "ffdhe4096", 00:23:50.830 "ffdhe6144", 00:23:50.830 "ffdhe8192" 00:23:50.830 ] 00:23:50.830 } 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "method": "bdev_nvme_set_hotplug", 00:23:50.830 "params": { 00:23:50.830 "period_us": 100000, 00:23:50.830 "enable": false 00:23:50.830 } 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "method": "bdev_malloc_create", 00:23:50.830 "params": { 00:23:50.830 "name": "malloc0", 00:23:50.830 "num_blocks": 8192, 00:23:50.830 "block_size": 4096, 00:23:50.830 "physical_block_size": 4096, 00:23:50.830 "uuid": "968e592f-9093-4c3d-918f-520eaee0aa41", 00:23:50.830 "optimal_io_boundary": 0, 00:23:50.830 "md_size": 0, 00:23:50.830 "dif_type": 0, 00:23:50.830 "dif_is_head_of_md": false, 00:23:50.830 "dif_pi_format": 0 00:23:50.830 } 00:23:50.830 }, 00:23:50.830 { 00:23:50.830 "method": "bdev_wait_for_examine" 00:23:50.830 } 00:23:50.830 ] 00:23:50.830 }, 00:23:50.830 { 00:23:50.831 "subsystem": "nbd", 00:23:50.831 "config": [] 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "subsystem": "scheduler", 00:23:50.831 "config": [ 00:23:50.831 { 00:23:50.831 "method": "framework_set_scheduler", 00:23:50.831 "params": { 00:23:50.831 "name": "static" 00:23:50.831 } 00:23:50.831 } 00:23:50.831 ] 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "subsystem": "nvmf", 00:23:50.831 "config": [ 00:23:50.831 { 00:23:50.831 "method": "nvmf_set_config", 00:23:50.831 "params": { 00:23:50.831 "discovery_filter": "match_any", 00:23:50.831 "admin_cmd_passthru": { 00:23:50.831 "identify_ctrlr": false 00:23:50.831 }, 00:23:50.831 "dhchap_digests": [ 00:23:50.831 "sha256", 00:23:50.831 "sha384", 00:23:50.831 "sha512" 00:23:50.831 ], 00:23:50.831 "dhchap_dhgroups": [ 00:23:50.831 "null", 00:23:50.831 "ffdhe2048", 00:23:50.831 "ffdhe3072", 00:23:50.831 "ffdhe4096", 00:23:50.831 "ffdhe6144", 00:23:50.831 "ffdhe8192" 00:23:50.831 ] 00:23:50.831 } 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "method": "nvmf_set_max_subsystems", 00:23:50.831 "params": { 00:23:50.831 "max_subsystems": 1024 00:23:50.831 } 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "method": "nvmf_set_crdt", 00:23:50.831 "params": { 00:23:50.831 "crdt1": 0, 00:23:50.831 "crdt2": 0, 00:23:50.831 "crdt3": 0 00:23:50.831 } 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "method": "nvmf_create_transport", 00:23:50.831 "params": { 00:23:50.831 "trtype": "TCP", 00:23:50.831 "max_queue_depth": 128, 00:23:50.831 "max_io_qpairs_per_ctrlr": 127, 00:23:50.831 "in_capsule_data_size": 4096, 00:23:50.831 "max_io_size": 131072, 00:23:50.831 "io_unit_size": 131072, 00:23:50.831 "max_aq_depth": 128, 00:23:50.831 "num_shared_buffers": 511, 00:23:50.831 "buf_cache_size": 4294967295, 00:23:50.831 "dif_insert_or_strip": false, 00:23:50.831 "zcopy": false, 00:23:50.831 "c2h_success": false, 00:23:50.831 "sock_priority": 0, 00:23:50.831 "abort_timeout_sec": 1, 00:23:50.831 "ack_timeout": 0, 00:23:50.831 "data_wr_pool_size": 0 00:23:50.831 } 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "method": "nvmf_create_subsystem", 00:23:50.831 "params": { 00:23:50.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.831 "allow_any_host": false, 00:23:50.831 "serial_number": "SPDK00000000000001", 00:23:50.831 "model_number": "SPDK bdev Controller", 00:23:50.831 "max_namespaces": 10, 00:23:50.831 "min_cntlid": 1, 00:23:50.831 
"max_cntlid": 65519, 00:23:50.831 "ana_reporting": false 00:23:50.831 } 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "method": "nvmf_subsystem_add_host", 00:23:50.831 "params": { 00:23:50.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.831 "host": "nqn.2016-06.io.spdk:host1", 00:23:50.831 "psk": "key0" 00:23:50.831 } 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "method": "nvmf_subsystem_add_ns", 00:23:50.831 "params": { 00:23:50.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.831 "namespace": { 00:23:50.831 "nsid": 1, 00:23:50.831 "bdev_name": "malloc0", 00:23:50.831 "nguid": "968E592F90934C3D918F520EAEE0AA41", 00:23:50.831 "uuid": "968e592f-9093-4c3d-918f-520eaee0aa41", 00:23:50.831 "no_auto_visible": false 00:23:50.831 } 00:23:50.831 } 00:23:50.831 }, 00:23:50.831 { 00:23:50.831 "method": "nvmf_subsystem_add_listener", 00:23:50.831 "params": { 00:23:50.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.831 "listen_address": { 00:23:50.831 "trtype": "TCP", 00:23:50.831 "adrfam": "IPv4", 00:23:50.831 "traddr": "10.0.0.2", 00:23:50.831 "trsvcid": "4420" 00:23:50.831 }, 00:23:50.831 "secure_channel": true 00:23:50.831 } 00:23:50.831 } 00:23:50.831 ] 00:23:50.831 } 00:23:50.831 ] 00:23:50.831 }' 00:23:50.831 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:51.090 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:51.090 "subsystems": [ 00:23:51.090 { 00:23:51.090 "subsystem": "keyring", 00:23:51.090 "config": [ 00:23:51.090 { 00:23:51.090 "method": "keyring_file_add_key", 00:23:51.090 "params": { 00:23:51.090 "name": "key0", 00:23:51.090 "path": "/tmp/tmp.PwrbRy4nk8" 00:23:51.090 } 00:23:51.090 } 00:23:51.090 ] 00:23:51.090 }, 00:23:51.090 { 00:23:51.090 "subsystem": "iobuf", 00:23:51.090 "config": [ 00:23:51.090 { 00:23:51.090 "method": "iobuf_set_options", 00:23:51.090 "params": { 00:23:51.090 "small_pool_count": 8192, 00:23:51.090 "large_pool_count": 1024, 00:23:51.090 "small_bufsize": 8192, 00:23:51.090 "large_bufsize": 135168, 00:23:51.090 "enable_numa": false 00:23:51.090 } 00:23:51.090 } 00:23:51.090 ] 00:23:51.090 }, 00:23:51.090 { 00:23:51.090 "subsystem": "sock", 00:23:51.090 "config": [ 00:23:51.090 { 00:23:51.090 "method": "sock_set_default_impl", 00:23:51.090 "params": { 00:23:51.090 "impl_name": "posix" 00:23:51.090 } 00:23:51.090 }, 00:23:51.090 { 00:23:51.090 "method": "sock_impl_set_options", 00:23:51.090 "params": { 00:23:51.090 "impl_name": "ssl", 00:23:51.090 "recv_buf_size": 4096, 00:23:51.090 "send_buf_size": 4096, 00:23:51.090 "enable_recv_pipe": true, 00:23:51.090 "enable_quickack": false, 00:23:51.090 "enable_placement_id": 0, 00:23:51.091 "enable_zerocopy_send_server": true, 00:23:51.091 "enable_zerocopy_send_client": false, 00:23:51.091 "zerocopy_threshold": 0, 00:23:51.091 "tls_version": 0, 00:23:51.091 "enable_ktls": false 00:23:51.091 } 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "method": "sock_impl_set_options", 00:23:51.091 "params": { 00:23:51.091 "impl_name": "posix", 00:23:51.091 "recv_buf_size": 2097152, 00:23:51.091 "send_buf_size": 2097152, 00:23:51.091 "enable_recv_pipe": true, 00:23:51.091 "enable_quickack": false, 00:23:51.091 "enable_placement_id": 0, 00:23:51.091 "enable_zerocopy_send_server": true, 00:23:51.091 "enable_zerocopy_send_client": false, 00:23:51.091 "zerocopy_threshold": 0, 00:23:51.091 "tls_version": 0, 00:23:51.091 "enable_ktls": false 00:23:51.091 } 00:23:51.091 
} 00:23:51.091 ] 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "subsystem": "vmd", 00:23:51.091 "config": [] 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "subsystem": "accel", 00:23:51.091 "config": [ 00:23:51.091 { 00:23:51.091 "method": "accel_set_options", 00:23:51.091 "params": { 00:23:51.091 "small_cache_size": 128, 00:23:51.091 "large_cache_size": 16, 00:23:51.091 "task_count": 2048, 00:23:51.091 "sequence_count": 2048, 00:23:51.091 "buf_count": 2048 00:23:51.091 } 00:23:51.091 } 00:23:51.091 ] 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "subsystem": "bdev", 00:23:51.091 "config": [ 00:23:51.091 { 00:23:51.091 "method": "bdev_set_options", 00:23:51.091 "params": { 00:23:51.091 "bdev_io_pool_size": 65535, 00:23:51.091 "bdev_io_cache_size": 256, 00:23:51.091 "bdev_auto_examine": true, 00:23:51.091 "iobuf_small_cache_size": 128, 00:23:51.091 "iobuf_large_cache_size": 16 00:23:51.091 } 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "method": "bdev_raid_set_options", 00:23:51.091 "params": { 00:23:51.091 "process_window_size_kb": 1024, 00:23:51.091 "process_max_bandwidth_mb_sec": 0 00:23:51.091 } 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "method": "bdev_iscsi_set_options", 00:23:51.091 "params": { 00:23:51.091 "timeout_sec": 30 00:23:51.091 } 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "method": "bdev_nvme_set_options", 00:23:51.091 "params": { 00:23:51.091 "action_on_timeout": "none", 00:23:51.091 "timeout_us": 0, 00:23:51.091 "timeout_admin_us": 0, 00:23:51.091 "keep_alive_timeout_ms": 10000, 00:23:51.091 "arbitration_burst": 0, 00:23:51.091 "low_priority_weight": 0, 00:23:51.091 "medium_priority_weight": 0, 00:23:51.091 "high_priority_weight": 0, 00:23:51.091 "nvme_adminq_poll_period_us": 10000, 00:23:51.091 "nvme_ioq_poll_period_us": 0, 00:23:51.091 "io_queue_requests": 512, 00:23:51.091 "delay_cmd_submit": true, 00:23:51.091 "transport_retry_count": 4, 00:23:51.091 "bdev_retry_count": 3, 00:23:51.091 "transport_ack_timeout": 0, 00:23:51.091 "ctrlr_loss_timeout_sec": 0, 00:23:51.091 "reconnect_delay_sec": 0, 00:23:51.091 "fast_io_fail_timeout_sec": 0, 00:23:51.091 "disable_auto_failback": false, 00:23:51.091 "generate_uuids": false, 00:23:51.091 "transport_tos": 0, 00:23:51.091 "nvme_error_stat": false, 00:23:51.091 "rdma_srq_size": 0, 00:23:51.091 "io_path_stat": false, 00:23:51.091 "allow_accel_sequence": false, 00:23:51.091 "rdma_max_cq_size": 0, 00:23:51.091 "rdma_cm_event_timeout_ms": 0, 00:23:51.091 "dhchap_digests": [ 00:23:51.091 "sha256", 00:23:51.091 "sha384", 00:23:51.091 "sha512" 00:23:51.091 ], 00:23:51.091 "dhchap_dhgroups": [ 00:23:51.091 "null", 00:23:51.091 "ffdhe2048", 00:23:51.091 "ffdhe3072", 00:23:51.091 "ffdhe4096", 00:23:51.091 "ffdhe6144", 00:23:51.091 "ffdhe8192" 00:23:51.091 ] 00:23:51.091 } 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "method": "bdev_nvme_attach_controller", 00:23:51.091 "params": { 00:23:51.091 "name": "TLSTEST", 00:23:51.091 "trtype": "TCP", 00:23:51.091 "adrfam": "IPv4", 00:23:51.091 "traddr": "10.0.0.2", 00:23:51.091 "trsvcid": "4420", 00:23:51.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.091 "prchk_reftag": false, 00:23:51.091 "prchk_guard": false, 00:23:51.091 "ctrlr_loss_timeout_sec": 0, 00:23:51.091 "reconnect_delay_sec": 0, 00:23:51.091 "fast_io_fail_timeout_sec": 0, 00:23:51.091 "psk": "key0", 00:23:51.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.091 "hdgst": false, 00:23:51.091 "ddgst": false, 00:23:51.091 "multipath": "multipath" 00:23:51.091 } 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "method": 
"bdev_nvme_set_hotplug", 00:23:51.091 "params": { 00:23:51.091 "period_us": 100000, 00:23:51.091 "enable": false 00:23:51.091 } 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "method": "bdev_wait_for_examine" 00:23:51.091 } 00:23:51.091 ] 00:23:51.091 }, 00:23:51.091 { 00:23:51.091 "subsystem": "nbd", 00:23:51.091 "config": [] 00:23:51.091 } 00:23:51.091 ] 00:23:51.091 }' 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 570560 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 570560 ']' 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 570560 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 570560 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 570560' 00:23:51.091 killing process with pid 570560 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 570560 00:23:51.091 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.091 00:23:51.091 Latency(us) 00:23:51.091 [2024-11-20T05:34:22.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.091 [2024-11-20T05:34:22.927Z] =================================================================================================================== 00:23:51.091 [2024-11-20T05:34:22.927Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.091 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 570560 00:23:51.350 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 570216 00:23:51.350 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 570216 ']' 00:23:51.350 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 570216 00:23:51.350 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:51.350 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.350 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 570216 00:23:51.350 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:51.350 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:51.350 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 570216' 00:23:51.350 killing process with pid 570216 00:23:51.350 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 570216 00:23:51.350 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 570216 00:23:51.610 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:51.610 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.610 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.610 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:51.610 "subsystems": [ 00:23:51.610 { 00:23:51.610 "subsystem": "keyring", 00:23:51.610 "config": [ 00:23:51.610 { 00:23:51.610 "method": "keyring_file_add_key", 00:23:51.610 "params": { 00:23:51.610 "name": "key0", 00:23:51.610 "path": "/tmp/tmp.PwrbRy4nk8" 00:23:51.610 } 00:23:51.610 } 00:23:51.610 ] 00:23:51.610 }, 00:23:51.610 { 00:23:51.610 "subsystem": "iobuf", 00:23:51.610 "config": [ 00:23:51.610 { 00:23:51.610 "method": "iobuf_set_options", 00:23:51.610 "params": { 00:23:51.610 "small_pool_count": 8192, 00:23:51.610 "large_pool_count": 1024, 00:23:51.610 "small_bufsize": 8192, 00:23:51.610 "large_bufsize": 135168, 00:23:51.610 "enable_numa": false 00:23:51.610 } 00:23:51.610 } 00:23:51.610 ] 00:23:51.610 }, 00:23:51.610 { 00:23:51.610 "subsystem": "sock", 00:23:51.610 "config": [ 00:23:51.610 { 00:23:51.610 "method": "sock_set_default_impl", 00:23:51.610 "params": { 00:23:51.610 "impl_name": "posix" 00:23:51.610 } 00:23:51.610 }, 00:23:51.610 { 00:23:51.610 "method": "sock_impl_set_options", 00:23:51.610 "params": { 00:23:51.610 "impl_name": "ssl", 00:23:51.610 "recv_buf_size": 4096, 00:23:51.610 "send_buf_size": 4096, 00:23:51.610 "enable_recv_pipe": true, 00:23:51.610 "enable_quickack": false, 00:23:51.610 "enable_placement_id": 0, 00:23:51.610 "enable_zerocopy_send_server": true, 00:23:51.610 "enable_zerocopy_send_client": false, 00:23:51.610 "zerocopy_threshold": 0, 00:23:51.610 "tls_version": 0, 00:23:51.610 "enable_ktls": false 00:23:51.610 } 00:23:51.610 }, 00:23:51.610 { 00:23:51.610 "method": "sock_impl_set_options", 00:23:51.610 "params": { 00:23:51.610 "impl_name": "posix", 00:23:51.610 "recv_buf_size": 2097152, 00:23:51.610 "send_buf_size": 2097152, 00:23:51.610 "enable_recv_pipe": true, 00:23:51.610 "enable_quickack": false, 00:23:51.610 "enable_placement_id": 0, 00:23:51.610 "enable_zerocopy_send_server": true, 00:23:51.610 "enable_zerocopy_send_client": false, 00:23:51.610 "zerocopy_threshold": 0, 00:23:51.610 "tls_version": 0, 00:23:51.610 "enable_ktls": false 00:23:51.610 } 00:23:51.610 } 00:23:51.610 ] 00:23:51.610 }, 00:23:51.610 { 00:23:51.610 "subsystem": "vmd", 00:23:51.610 "config": [] 00:23:51.610 }, 00:23:51.610 { 00:23:51.610 "subsystem": "accel", 00:23:51.610 "config": [ 00:23:51.610 { 00:23:51.610 "method": "accel_set_options", 00:23:51.610 "params": { 00:23:51.610 "small_cache_size": 128, 00:23:51.610 "large_cache_size": 16, 00:23:51.610 "task_count": 2048, 00:23:51.610 "sequence_count": 2048, 00:23:51.611 "buf_count": 2048 00:23:51.611 } 00:23:51.611 } 00:23:51.611 ] 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "subsystem": "bdev", 00:23:51.611 "config": [ 00:23:51.611 { 00:23:51.611 "method": "bdev_set_options", 00:23:51.611 "params": { 00:23:51.611 "bdev_io_pool_size": 65535, 00:23:51.611 "bdev_io_cache_size": 256, 00:23:51.611 "bdev_auto_examine": true, 00:23:51.611 "iobuf_small_cache_size": 128, 00:23:51.611 "iobuf_large_cache_size": 16 00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "bdev_raid_set_options", 00:23:51.611 "params": { 00:23:51.611 "process_window_size_kb": 1024, 00:23:51.611 "process_max_bandwidth_mb_sec": 0 00:23:51.611 } 00:23:51.611 }, 
00:23:51.611 { 00:23:51.611 "method": "bdev_iscsi_set_options", 00:23:51.611 "params": { 00:23:51.611 "timeout_sec": 30 00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "bdev_nvme_set_options", 00:23:51.611 "params": { 00:23:51.611 "action_on_timeout": "none", 00:23:51.611 "timeout_us": 0, 00:23:51.611 "timeout_admin_us": 0, 00:23:51.611 "keep_alive_timeout_ms": 10000, 00:23:51.611 "arbitration_burst": 0, 00:23:51.611 "low_priority_weight": 0, 00:23:51.611 "medium_priority_weight": 0, 00:23:51.611 "high_priority_weight": 0, 00:23:51.611 "nvme_adminq_poll_period_us": 10000, 00:23:51.611 "nvme_ioq_poll_period_us": 0, 00:23:51.611 "io_queue_requests": 0, 00:23:51.611 "delay_cmd_submit": true, 00:23:51.611 "transport_retry_count": 4, 00:23:51.611 "bdev_retry_count": 3, 00:23:51.611 "transport_ack_timeout": 0, 00:23:51.611 "ctrlr_loss_timeout_sec": 0, 00:23:51.611 "reconnect_delay_sec": 0, 00:23:51.611 "fast_io_fail_timeout_sec": 0, 00:23:51.611 "disable_auto_failback": false, 00:23:51.611 "generate_uuids": false, 00:23:51.611 "transport_tos": 0, 00:23:51.611 "nvme_error_stat": false, 00:23:51.611 "rdma_srq_size": 0, 00:23:51.611 "io_path_stat": false, 00:23:51.611 "allow_accel_sequence": false, 00:23:51.611 "rdma_max_cq_size": 0, 00:23:51.611 "rdma_cm_event_timeout_ms": 0, 00:23:51.611 "dhchap_digests": [ 00:23:51.611 "sha256", 00:23:51.611 "sha384", 00:23:51.611 "sha512" 00:23:51.611 ], 00:23:51.611 "dhchap_dhgroups": [ 00:23:51.611 "null", 00:23:51.611 "ffdhe2048", 00:23:51.611 "ffdhe3072", 00:23:51.611 "ffdhe4096", 00:23:51.611 "ffdhe6144", 00:23:51.611 "ffdhe8192" 00:23:51.611 ] 00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "bdev_nvme_set_hotplug", 00:23:51.611 "params": { 00:23:51.611 "period_us": 100000, 00:23:51.611 "enable": false 00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "bdev_malloc_create", 00:23:51.611 "params": { 00:23:51.611 "name": "malloc0", 00:23:51.611 "num_blocks": 8192, 00:23:51.611 "block_size": 4096, 00:23:51.611 "physical_block_size": 4096, 00:23:51.611 "uuid": "968e592f-9093-4c3d-918f-520eaee0aa41", 00:23:51.611 "optimal_io_boundary": 0, 00:23:51.611 "md_size": 0, 00:23:51.611 "dif_type": 0, 00:23:51.611 "dif_is_head_of_md": false, 00:23:51.611 "dif_pi_format": 0 00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "bdev_wait_for_examine" 00:23:51.611 } 00:23:51.611 ] 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "subsystem": "nbd", 00:23:51.611 "config": [] 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "subsystem": "scheduler", 00:23:51.611 "config": [ 00:23:51.611 { 00:23:51.611 "method": "framework_set_scheduler", 00:23:51.611 "params": { 00:23:51.611 "name": "static" 00:23:51.611 } 00:23:51.611 } 00:23:51.611 ] 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "subsystem": "nvmf", 00:23:51.611 "config": [ 00:23:51.611 { 00:23:51.611 "method": "nvmf_set_config", 00:23:51.611 "params": { 00:23:51.611 "discovery_filter": "match_any", 00:23:51.611 "admin_cmd_passthru": { 00:23:51.611 "identify_ctrlr": false 00:23:51.611 }, 00:23:51.611 "dhchap_digests": [ 00:23:51.611 "sha256", 00:23:51.611 "sha384", 00:23:51.611 "sha512" 00:23:51.611 ], 00:23:51.611 "dhchap_dhgroups": [ 00:23:51.611 "null", 00:23:51.611 "ffdhe2048", 00:23:51.611 "ffdhe3072", 00:23:51.611 "ffdhe4096", 00:23:51.611 "ffdhe6144", 00:23:51.611 "ffdhe8192" 00:23:51.611 ] 00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "nvmf_set_max_subsystems", 00:23:51.611 "params": { 00:23:51.611 "max_subsystems": 1024 
00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "nvmf_set_crdt", 00:23:51.611 "params": { 00:23:51.611 "crdt1": 0, 00:23:51.611 "crdt2": 0, 00:23:51.611 "crdt3": 0 00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "nvmf_create_transport", 00:23:51.611 "params": { 00:23:51.611 "trtype": "TCP", 00:23:51.611 "max_queue_depth": 128, 00:23:51.611 "max_io_qpairs_per_ctrlr": 127, 00:23:51.611 "in_capsule_data_size": 4096, 00:23:51.611 "max_io_size": 131072, 00:23:51.611 "io_unit_size": 131072, 00:23:51.611 "max_aq_depth": 128, 00:23:51.611 "num_shared_buffers": 511, 00:23:51.611 "buf_cache_size": 4294967295, 00:23:51.611 "dif_insert_or_strip": false, 00:23:51.611 "zcopy": false, 00:23:51.611 "c2h_success": false, 00:23:51.611 "sock_priority": 0, 00:23:51.611 "abort_timeout_sec": 1, 00:23:51.611 "ack_timeout": 0, 00:23:51.611 "data_wr_pool_size": 0 00:23:51.611 } 00:23:51.611 }, 00:23:51.611 { 00:23:51.611 "method": "nvmf_create_subsystem", 00:23:51.611 "params": { 00:23:51.611 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.611 "allow_any_host": false, 00:23:51.611 "serial_number": "SPDK00000000000001", 00:23:51.611 "model_number": "SPDK bdev Controller", 00:23:51.611 "max_namespaces": 10, 00:23:51.611 "min_cntlid": 1, 00:23:51.611 "max_cntlid": 65519, 00:23:51.611 "ana_reporting": false 00:23:51.611 } 00:23:51.612 }, 00:23:51.612 { 00:23:51.612 "method": "nvmf_subsystem_add_host", 00:23:51.612 "params": { 00:23:51.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.612 "host": "nqn.2016-06.io.spdk:host1", 00:23:51.612 "psk": "key0" 00:23:51.612 } 00:23:51.612 }, 00:23:51.612 { 00:23:51.612 "method": "nvmf_subsystem_add_ns", 00:23:51.612 "params": { 00:23:51.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.612 "namespace": { 00:23:51.612 "nsid": 1, 00:23:51.612 "bdev_name": "malloc0", 00:23:51.612 "nguid": "968E592F90934C3D918F520EAEE0AA41", 00:23:51.612 "uuid": "968e592f-9093-4c3d-918f-520eaee0aa41", 00:23:51.612 "no_auto_visible": false 00:23:51.612 } 00:23:51.612 } 00:23:51.612 }, 00:23:51.612 { 00:23:51.612 "method": "nvmf_subsystem_add_listener", 00:23:51.612 "params": { 00:23:51.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.612 "listen_address": { 00:23:51.612 "trtype": "TCP", 00:23:51.612 "adrfam": "IPv4", 00:23:51.612 "traddr": "10.0.0.2", 00:23:51.612 "trsvcid": "4420" 00:23:51.612 }, 00:23:51.612 "secure_channel": true 00:23:51.612 } 00:23:51.612 } 00:23:51.612 ] 00:23:51.612 } 00:23:51.612 ] 00:23:51.612 }' 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=570934 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 570934 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 570934 ']' 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:51.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:51.612 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.612 [2024-11-20 06:34:23.259313] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:23:51.612 [2024-11-20 06:34:23.259360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.612 [2024-11-20 06:34:23.318647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.612 [2024-11-20 06:34:23.359463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.612 [2024-11-20 06:34:23.359495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.612 [2024-11-20 06:34:23.359502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.612 [2024-11-20 06:34:23.359507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.612 [2024-11-20 06:34:23.359512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.612 [2024-11-20 06:34:23.360096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.871 [2024-11-20 06:34:23.571981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.871 [2024-11-20 06:34:23.604011] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.871 [2024-11-20 06:34:23.604215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=570969 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 570969 /var/tmp/bdevperf.sock 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 570969 ']' 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.438 06:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:52.438 "subsystems": [ 00:23:52.438 { 00:23:52.438 "subsystem": "keyring", 00:23:52.438 "config": [ 00:23:52.438 { 00:23:52.438 "method": "keyring_file_add_key", 00:23:52.438 "params": { 00:23:52.438 "name": "key0", 00:23:52.438 "path": "/tmp/tmp.PwrbRy4nk8" 00:23:52.438 } 00:23:52.438 } 00:23:52.438 ] 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "subsystem": "iobuf", 00:23:52.438 "config": [ 00:23:52.438 { 00:23:52.438 "method": "iobuf_set_options", 00:23:52.438 "params": { 00:23:52.438 "small_pool_count": 8192, 00:23:52.438 "large_pool_count": 1024, 00:23:52.438 "small_bufsize": 8192, 00:23:52.438 "large_bufsize": 135168, 00:23:52.438 "enable_numa": false 00:23:52.438 } 00:23:52.438 } 00:23:52.438 ] 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "subsystem": "sock", 00:23:52.438 "config": [ 00:23:52.438 { 00:23:52.438 "method": "sock_set_default_impl", 00:23:52.438 "params": { 00:23:52.438 "impl_name": "posix" 00:23:52.438 } 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "method": "sock_impl_set_options", 00:23:52.438 "params": { 00:23:52.438 "impl_name": "ssl", 00:23:52.438 "recv_buf_size": 4096, 00:23:52.438 "send_buf_size": 4096, 00:23:52.438 "enable_recv_pipe": true, 00:23:52.438 "enable_quickack": false, 00:23:52.438 "enable_placement_id": 0, 00:23:52.438 "enable_zerocopy_send_server": true, 00:23:52.438 "enable_zerocopy_send_client": false, 00:23:52.438 "zerocopy_threshold": 0, 00:23:52.438 "tls_version": 0, 00:23:52.438 "enable_ktls": false 00:23:52.438 } 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "method": "sock_impl_set_options", 00:23:52.438 "params": { 00:23:52.438 "impl_name": "posix", 00:23:52.438 "recv_buf_size": 2097152, 00:23:52.438 "send_buf_size": 2097152, 00:23:52.438 "enable_recv_pipe": true, 00:23:52.438 "enable_quickack": false, 00:23:52.438 "enable_placement_id": 0, 00:23:52.438 "enable_zerocopy_send_server": true, 00:23:52.438 "enable_zerocopy_send_client": false, 00:23:52.438 "zerocopy_threshold": 0, 00:23:52.438 "tls_version": 0, 00:23:52.438 "enable_ktls": false 00:23:52.438 } 00:23:52.438 } 00:23:52.438 ] 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "subsystem": "vmd", 00:23:52.438 "config": [] 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "subsystem": "accel", 00:23:52.438 "config": [ 00:23:52.438 { 00:23:52.438 "method": "accel_set_options", 00:23:52.438 "params": { 00:23:52.438 "small_cache_size": 128, 00:23:52.438 "large_cache_size": 16, 00:23:52.438 "task_count": 2048, 00:23:52.438 "sequence_count": 2048, 00:23:52.438 "buf_count": 2048 00:23:52.438 } 00:23:52.438 } 00:23:52.438 ] 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "subsystem": "bdev", 00:23:52.438 "config": [ 00:23:52.438 { 00:23:52.438 "method": "bdev_set_options", 00:23:52.438 "params": { 00:23:52.438 "bdev_io_pool_size": 65535, 00:23:52.438 "bdev_io_cache_size": 256, 00:23:52.438 "bdev_auto_examine": true, 00:23:52.438 "iobuf_small_cache_size": 128, 00:23:52.438 "iobuf_large_cache_size": 16 00:23:52.438 } 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "method": "bdev_raid_set_options", 00:23:52.438 "params": { 00:23:52.438 "process_window_size_kb": 1024, 00:23:52.438 "process_max_bandwidth_mb_sec": 0 00:23:52.438 } 00:23:52.438 }, 
00:23:52.438 { 00:23:52.438 "method": "bdev_iscsi_set_options", 00:23:52.438 "params": { 00:23:52.438 "timeout_sec": 30 00:23:52.438 } 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "method": "bdev_nvme_set_options", 00:23:52.438 "params": { 00:23:52.438 "action_on_timeout": "none", 00:23:52.438 "timeout_us": 0, 00:23:52.438 "timeout_admin_us": 0, 00:23:52.438 "keep_alive_timeout_ms": 10000, 00:23:52.438 "arbitration_burst": 0, 00:23:52.438 "low_priority_weight": 0, 00:23:52.438 "medium_priority_weight": 0, 00:23:52.438 "high_priority_weight": 0, 00:23:52.438 "nvme_adminq_poll_period_us": 10000, 00:23:52.438 "nvme_ioq_poll_period_us": 0, 00:23:52.438 "io_queue_requests": 512, 00:23:52.438 "delay_cmd_submit": true, 00:23:52.438 "transport_retry_count": 4, 00:23:52.438 "bdev_retry_count": 3, 00:23:52.438 "transport_ack_timeout": 0, 00:23:52.438 "ctrlr_loss_timeout_sec": 0, 00:23:52.438 "reconnect_delay_sec": 0, 00:23:52.438 "fast_io_fail_timeout_sec": 0, 00:23:52.438 "disable_auto_failback": false, 00:23:52.438 "generate_uuids": false, 00:23:52.438 "transport_tos": 0, 00:23:52.438 "nvme_error_stat": false, 00:23:52.438 "rdma_srq_size": 0, 00:23:52.438 "io_path_stat": false, 00:23:52.438 "allow_accel_sequence": false, 00:23:52.438 "rdma_max_cq_size": 0, 00:23:52.438 "rdma_cm_event_timeout_ms": 0, 00:23:52.438 "dhchap_digests": [ 00:23:52.438 "sha256", 00:23:52.438 "sha384", 00:23:52.438 "sha512" 00:23:52.438 ], 00:23:52.438 "dhchap_dhgroups": [ 00:23:52.438 "null", 00:23:52.438 "ffdhe2048", 00:23:52.438 "ffdhe3072", 00:23:52.438 "ffdhe4096", 00:23:52.438 "ffdhe6144", 00:23:52.438 "ffdhe8192" 00:23:52.438 ] 00:23:52.438 } 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "method": "bdev_nvme_attach_controller", 00:23:52.438 "params": { 00:23:52.438 "name": "TLSTEST", 00:23:52.438 "trtype": "TCP", 00:23:52.438 "adrfam": "IPv4", 00:23:52.438 "traddr": "10.0.0.2", 00:23:52.438 "trsvcid": "4420", 00:23:52.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.438 "prchk_reftag": false, 00:23:52.438 "prchk_guard": false, 00:23:52.438 "ctrlr_loss_timeout_sec": 0, 00:23:52.438 "reconnect_delay_sec": 0, 00:23:52.438 "fast_io_fail_timeout_sec": 0, 00:23:52.438 "psk": "key0", 00:23:52.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.438 "hdgst": false, 00:23:52.438 "ddgst": false, 00:23:52.438 "multipath": "multipath" 00:23:52.438 } 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "method": "bdev_nvme_set_hotplug", 00:23:52.438 "params": { 00:23:52.438 "period_us": 100000, 00:23:52.438 "enable": false 00:23:52.438 } 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "method": "bdev_wait_for_examine" 00:23:52.438 } 00:23:52.438 ] 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "subsystem": "nbd", 00:23:52.438 "config": [] 00:23:52.438 } 00:23:52.438 ] 00:23:52.438 }' 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.438 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.438 [2024-11-20 06:34:24.180744] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:23:52.439 [2024-11-20 06:34:24.180793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570969 ] 00:23:52.439 [2024-11-20 06:34:24.257298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.697 [2024-11-20 06:34:24.299634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.697 [2024-11-20 06:34:24.450982] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.264 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.264 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:53.264 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:53.523 Running I/O for 10 seconds... 00:23:55.398 5350.00 IOPS, 20.90 MiB/s [2024-11-20T05:34:28.180Z] 5459.00 IOPS, 21.32 MiB/s [2024-11-20T05:34:29.556Z] 5522.33 IOPS, 21.57 MiB/s [2024-11-20T05:34:30.492Z] 5519.50 IOPS, 21.56 MiB/s [2024-11-20T05:34:31.428Z] 5532.80 IOPS, 21.61 MiB/s [2024-11-20T05:34:32.366Z] 5533.17 IOPS, 21.61 MiB/s [2024-11-20T05:34:33.302Z] 5541.00 IOPS, 21.64 MiB/s [2024-11-20T05:34:34.237Z] 5547.38 IOPS, 21.67 MiB/s [2024-11-20T05:34:35.174Z] 5552.78 IOPS, 21.69 MiB/s [2024-11-20T05:34:35.174Z] 5558.60 IOPS, 21.71 MiB/s 00:24:03.338 Latency(us) 00:24:03.338 [2024-11-20T05:34:35.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.338 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:03.338 Verification LBA range: start 0x0 length 0x2000 00:24:03.338 TLSTESTn1 : 10.02 5557.90 21.71 0.00 0.00 22991.19 5492.54 47435.58 00:24:03.338 [2024-11-20T05:34:35.174Z] =================================================================================================================== 00:24:03.338 [2024-11-20T05:34:35.174Z] Total : 5557.90 21.71 0.00 0.00 22991.19 5492.54 47435.58 00:24:03.338 { 00:24:03.338 "results": [ 00:24:03.338 { 00:24:03.338 "job": "TLSTESTn1", 00:24:03.338 "core_mask": "0x4", 00:24:03.338 "workload": "verify", 00:24:03.338 "status": "finished", 00:24:03.338 "verify_range": { 00:24:03.338 "start": 0, 00:24:03.338 "length": 8192 00:24:03.338 }, 00:24:03.338 "queue_depth": 128, 00:24:03.338 "io_size": 4096, 00:24:03.338 "runtime": 10.024283, 00:24:03.338 "iops": 5557.90374234247, 00:24:03.338 "mibps": 21.710561493525272, 00:24:03.338 "io_failed": 0, 00:24:03.338 "io_timeout": 0, 00:24:03.338 "avg_latency_us": 22991.188277820227, 00:24:03.338 "min_latency_us": 5492.540952380952, 00:24:03.338 "max_latency_us": 47435.58095238095 00:24:03.338 } 00:24:03.338 ], 00:24:03.338 "core_count": 1 00:24:03.338 } 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 570969 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 570969 ']' 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 570969 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 570969 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 570969' 00:24:03.598 killing process with pid 570969 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 570969 00:24:03.598 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.598 00:24:03.598 Latency(us) 00:24:03.598 [2024-11-20T05:34:35.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.598 [2024-11-20T05:34:35.434Z] =================================================================================================================== 00:24:03.598 [2024-11-20T05:34:35.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 570969 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 570934 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 570934 ']' 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 570934 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.598 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 570934 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 570934' 00:24:03.857 killing process with pid 570934 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 570934 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 570934 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=572816 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 572816 00:24:03.857 06:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 572816 ']' 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:03.857 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.857 [2024-11-20 06:34:35.657811] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:03.858 [2024-11-20 06:34:35.657862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.117 [2024-11-20 06:34:35.735613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.117 [2024-11-20 06:34:35.778198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.117 [2024-11-20 06:34:35.778237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.117 [2024-11-20 06:34:35.778245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.117 [2024-11-20 06:34:35.778251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.117 [2024-11-20 06:34:35.778256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:04.117 [2024-11-20 06:34:35.778835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.685 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:04.685 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:04.685 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.685 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:04.685 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.944 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.944 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.PwrbRy4nk8 00:24:04.944 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PwrbRy4nk8 00:24:04.944 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:04.944 [2024-11-20 06:34:36.689148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.944 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:05.203 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:05.462 [2024-11-20 06:34:37.046071] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.462 [2024-11-20 06:34:37.046302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.462 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:05.462 malloc0 00:24:05.462 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:05.721 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=573287 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 573287 /var/tmp/bdevperf.sock 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 573287 ']' 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:05.980 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.239 [2024-11-20 06:34:37.814919] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:06.239 [2024-11-20 06:34:37.814964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573287 ] 00:24:06.239 [2024-11-20 06:34:37.887671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.239 [2024-11-20 06:34:37.927754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.239 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:06.239 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:06.239 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:24:06.498 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:06.757 [2024-11-20 06:34:38.379338] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.757 nvme0n1 00:24:06.757 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:06.757 Running I/O for 1 seconds... 
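For reference, the test case above (target/tls.sh@220 through @234) exercises the TLS/PSK path end to end: the target gets a TCP transport, a subsystem with a TLS-enabled listener (the -k flag), a malloc namespace, and a host entry bound to a PSK registered through the keyring, while the bdevperf initiator registers the same PSK under the same key name before attaching over TLS. A minimal sketch of that RPC sequence, assembled strictly from the commands logged above; the $RPC shorthand for the full scripts/rpc.py path is the only assumption:

    # Shorthand for the RPC client used throughout this job (assumed alias).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side: TCP transport, subsystem, TLS listener (-k), namespace, PSK host entry.
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side, against the bdevperf RPC socket: same key, then attach over TLS.
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The one-second verify run whose results follow is driven against the nvme0n1 bdev created by that attach.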
00:24:08.135 5130.00 IOPS, 20.04 MiB/s 00:24:08.135 Latency(us) 00:24:08.135 [2024-11-20T05:34:39.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.135 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:08.135 Verification LBA range: start 0x0 length 0x2000 00:24:08.135 nvme0n1 : 1.02 5166.04 20.18 0.00 0.00 24572.23 7458.62 63913.20 00:24:08.135 [2024-11-20T05:34:39.971Z] =================================================================================================================== 00:24:08.135 [2024-11-20T05:34:39.971Z] Total : 5166.04 20.18 0.00 0.00 24572.23 7458.62 63913.20 00:24:08.135 { 00:24:08.135 "results": [ 00:24:08.135 { 00:24:08.135 "job": "nvme0n1", 00:24:08.135 "core_mask": "0x2", 00:24:08.135 "workload": "verify", 00:24:08.135 "status": "finished", 00:24:08.135 "verify_range": { 00:24:08.135 "start": 0, 00:24:08.135 "length": 8192 00:24:08.135 }, 00:24:08.135 "queue_depth": 128, 00:24:08.135 "io_size": 4096, 00:24:08.135 "runtime": 1.0178, 00:24:08.135 "iops": 5166.04440951071, 00:24:08.135 "mibps": 20.17986097465121, 00:24:08.135 "io_failed": 0, 00:24:08.135 "io_timeout": 0, 00:24:08.135 "avg_latency_us": 24572.22813961492, 00:24:08.135 "min_latency_us": 7458.620952380952, 00:24:08.135 "max_latency_us": 63913.20380952381 00:24:08.135 } 00:24:08.135 ], 00:24:08.135 "core_count": 1 00:24:08.135 } 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 573287 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 573287 ']' 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 573287 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 573287 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 573287' 00:24:08.135 killing process with pid 573287 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 573287 00:24:08.135 Received shutdown signal, test time was about 1.000000 seconds 00:24:08.135 00:24:08.135 Latency(us) 00:24:08.135 [2024-11-20T05:34:39.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.135 [2024-11-20T05:34:39.971Z] =================================================================================================================== 00:24:08.135 [2024-11-20T05:34:39.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 573287 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 572816 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 572816 ']' 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 572816 00:24:08.135 06:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 572816 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 572816' 00:24:08.135 killing process with pid 572816 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 572816 00:24:08.135 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 572816 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=573640 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 573640 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 573640 ']' 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:08.395 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.395 [2024-11-20 06:34:40.088232] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:08.395 [2024-11-20 06:34:40.088278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.395 [2024-11-20 06:34:40.164717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.395 [2024-11-20 06:34:40.206734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.395 [2024-11-20 06:34:40.206766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:08.395 [2024-11-20 06:34:40.206775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.395 [2024-11-20 06:34:40.206783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.395 [2024-11-20 06:34:40.206790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.395 [2024-11-20 06:34:40.207388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.654 [2024-11-20 06:34:40.350558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.654 malloc0 00:24:08.654 [2024-11-20 06:34:40.378604] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.654 [2024-11-20 06:34:40.378793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=573782 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 573782 /var/tmp/bdevperf.sock 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 573782 ']' 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:08.654 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.654 [2024-11-20 06:34:40.454307] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:24:08.654 [2024-11-20 06:34:40.454347] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573782 ] 00:24:08.913 [2024-11-20 06:34:40.528789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.913 [2024-11-20 06:34:40.570986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.913 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:08.913 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:08.913 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PwrbRy4nk8 00:24:09.171 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:09.431 [2024-11-20 06:34:41.006777] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.431 nvme0n1 00:24:09.431 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:09.431 Running I/O for 1 seconds... 00:24:10.368 5340.00 IOPS, 20.86 MiB/s 00:24:10.368 Latency(us) 00:24:10.368 [2024-11-20T05:34:42.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.368 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:10.368 Verification LBA range: start 0x0 length 0x2000 00:24:10.368 nvme0n1 : 1.01 5400.46 21.10 0.00 0.00 23544.66 5274.09 42692.02 00:24:10.368 [2024-11-20T05:34:42.204Z] =================================================================================================================== 00:24:10.368 [2024-11-20T05:34:42.204Z] Total : 5400.46 21.10 0.00 0.00 23544.66 5274.09 42692.02 00:24:10.368 { 00:24:10.368 "results": [ 00:24:10.368 { 00:24:10.368 "job": "nvme0n1", 00:24:10.368 "core_mask": "0x2", 00:24:10.368 "workload": "verify", 00:24:10.368 "status": "finished", 00:24:10.368 "verify_range": { 00:24:10.368 "start": 0, 00:24:10.368 "length": 8192 00:24:10.368 }, 00:24:10.368 "queue_depth": 128, 00:24:10.368 "io_size": 4096, 00:24:10.368 "runtime": 1.012692, 00:24:10.368 "iops": 5400.457394745885, 00:24:10.368 "mibps": 21.095536698226113, 00:24:10.368 "io_failed": 0, 00:24:10.368 "io_timeout": 0, 00:24:10.368 "avg_latency_us": 23544.664939877577, 00:24:10.368 "min_latency_us": 5274.087619047619, 00:24:10.368 "max_latency_us": 42692.02285714286 00:24:10.368 } 00:24:10.368 ], 00:24:10.368 "core_count": 1 00:24:10.368 } 00:24:10.628 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:10.628 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.628 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.628 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.628 06:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:10.628 "subsystems": [ 00:24:10.628 { 00:24:10.628 "subsystem": "keyring", 00:24:10.628 "config": [ 00:24:10.628 { 00:24:10.628 "method": "keyring_file_add_key", 00:24:10.628 "params": { 00:24:10.628 "name": "key0", 00:24:10.628 "path": "/tmp/tmp.PwrbRy4nk8" 00:24:10.628 } 00:24:10.628 } 00:24:10.628 ] 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "subsystem": "iobuf", 00:24:10.628 "config": [ 00:24:10.628 { 00:24:10.628 "method": "iobuf_set_options", 00:24:10.628 "params": { 00:24:10.628 "small_pool_count": 8192, 00:24:10.628 "large_pool_count": 1024, 00:24:10.628 "small_bufsize": 8192, 00:24:10.628 "large_bufsize": 135168, 00:24:10.628 "enable_numa": false 00:24:10.628 } 00:24:10.628 } 00:24:10.628 ] 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "subsystem": "sock", 00:24:10.628 "config": [ 00:24:10.628 { 00:24:10.628 "method": "sock_set_default_impl", 00:24:10.628 "params": { 00:24:10.628 "impl_name": "posix" 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "sock_impl_set_options", 00:24:10.628 "params": { 00:24:10.628 "impl_name": "ssl", 00:24:10.628 "recv_buf_size": 4096, 00:24:10.628 "send_buf_size": 4096, 00:24:10.628 "enable_recv_pipe": true, 00:24:10.628 "enable_quickack": false, 00:24:10.628 "enable_placement_id": 0, 00:24:10.628 "enable_zerocopy_send_server": true, 00:24:10.628 "enable_zerocopy_send_client": false, 00:24:10.628 "zerocopy_threshold": 0, 00:24:10.628 "tls_version": 0, 00:24:10.628 "enable_ktls": false 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "sock_impl_set_options", 00:24:10.628 "params": { 00:24:10.628 "impl_name": "posix", 00:24:10.628 "recv_buf_size": 2097152, 00:24:10.628 "send_buf_size": 2097152, 00:24:10.628 "enable_recv_pipe": true, 00:24:10.628 "enable_quickack": false, 00:24:10.628 "enable_placement_id": 0, 00:24:10.628 "enable_zerocopy_send_server": true, 00:24:10.628 "enable_zerocopy_send_client": false, 00:24:10.628 "zerocopy_threshold": 0, 00:24:10.628 "tls_version": 0, 00:24:10.628 "enable_ktls": false 00:24:10.628 } 00:24:10.628 } 00:24:10.628 ] 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "subsystem": "vmd", 00:24:10.628 "config": [] 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "subsystem": "accel", 00:24:10.628 "config": [ 00:24:10.628 { 00:24:10.628 "method": "accel_set_options", 00:24:10.628 "params": { 00:24:10.628 "small_cache_size": 128, 00:24:10.628 "large_cache_size": 16, 00:24:10.628 "task_count": 2048, 00:24:10.628 "sequence_count": 2048, 00:24:10.628 "buf_count": 2048 00:24:10.628 } 00:24:10.628 } 00:24:10.628 ] 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "subsystem": "bdev", 00:24:10.628 "config": [ 00:24:10.628 { 00:24:10.628 "method": "bdev_set_options", 00:24:10.628 "params": { 00:24:10.628 "bdev_io_pool_size": 65535, 00:24:10.628 "bdev_io_cache_size": 256, 00:24:10.628 "bdev_auto_examine": true, 00:24:10.628 "iobuf_small_cache_size": 128, 00:24:10.628 "iobuf_large_cache_size": 16 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "bdev_raid_set_options", 00:24:10.628 "params": { 00:24:10.628 "process_window_size_kb": 1024, 00:24:10.628 "process_max_bandwidth_mb_sec": 0 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "bdev_iscsi_set_options", 00:24:10.628 "params": { 00:24:10.628 "timeout_sec": 30 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "bdev_nvme_set_options", 00:24:10.628 "params": { 00:24:10.628 "action_on_timeout": "none", 00:24:10.628 
"timeout_us": 0, 00:24:10.628 "timeout_admin_us": 0, 00:24:10.628 "keep_alive_timeout_ms": 10000, 00:24:10.628 "arbitration_burst": 0, 00:24:10.628 "low_priority_weight": 0, 00:24:10.628 "medium_priority_weight": 0, 00:24:10.628 "high_priority_weight": 0, 00:24:10.628 "nvme_adminq_poll_period_us": 10000, 00:24:10.628 "nvme_ioq_poll_period_us": 0, 00:24:10.628 "io_queue_requests": 0, 00:24:10.628 "delay_cmd_submit": true, 00:24:10.628 "transport_retry_count": 4, 00:24:10.628 "bdev_retry_count": 3, 00:24:10.628 "transport_ack_timeout": 0, 00:24:10.628 "ctrlr_loss_timeout_sec": 0, 00:24:10.628 "reconnect_delay_sec": 0, 00:24:10.628 "fast_io_fail_timeout_sec": 0, 00:24:10.628 "disable_auto_failback": false, 00:24:10.628 "generate_uuids": false, 00:24:10.628 "transport_tos": 0, 00:24:10.628 "nvme_error_stat": false, 00:24:10.628 "rdma_srq_size": 0, 00:24:10.628 "io_path_stat": false, 00:24:10.628 "allow_accel_sequence": false, 00:24:10.628 "rdma_max_cq_size": 0, 00:24:10.628 "rdma_cm_event_timeout_ms": 0, 00:24:10.628 "dhchap_digests": [ 00:24:10.628 "sha256", 00:24:10.628 "sha384", 00:24:10.628 "sha512" 00:24:10.628 ], 00:24:10.628 "dhchap_dhgroups": [ 00:24:10.628 "null", 00:24:10.628 "ffdhe2048", 00:24:10.628 "ffdhe3072", 00:24:10.628 "ffdhe4096", 00:24:10.628 "ffdhe6144", 00:24:10.628 "ffdhe8192" 00:24:10.628 ] 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "bdev_nvme_set_hotplug", 00:24:10.628 "params": { 00:24:10.628 "period_us": 100000, 00:24:10.628 "enable": false 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "bdev_malloc_create", 00:24:10.628 "params": { 00:24:10.628 "name": "malloc0", 00:24:10.628 "num_blocks": 8192, 00:24:10.628 "block_size": 4096, 00:24:10.628 "physical_block_size": 4096, 00:24:10.628 "uuid": "5a5bb891-d687-4a60-86fb-5f22d99fdd2a", 00:24:10.628 "optimal_io_boundary": 0, 00:24:10.628 "md_size": 0, 00:24:10.628 "dif_type": 0, 00:24:10.628 "dif_is_head_of_md": false, 00:24:10.628 "dif_pi_format": 0 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "bdev_wait_for_examine" 00:24:10.628 } 00:24:10.628 ] 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "subsystem": "nbd", 00:24:10.628 "config": [] 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "subsystem": "scheduler", 00:24:10.628 "config": [ 00:24:10.628 { 00:24:10.628 "method": "framework_set_scheduler", 00:24:10.628 "params": { 00:24:10.628 "name": "static" 00:24:10.628 } 00:24:10.628 } 00:24:10.628 ] 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "subsystem": "nvmf", 00:24:10.628 "config": [ 00:24:10.628 { 00:24:10.628 "method": "nvmf_set_config", 00:24:10.628 "params": { 00:24:10.628 "discovery_filter": "match_any", 00:24:10.628 "admin_cmd_passthru": { 00:24:10.628 "identify_ctrlr": false 00:24:10.628 }, 00:24:10.628 "dhchap_digests": [ 00:24:10.628 "sha256", 00:24:10.628 "sha384", 00:24:10.628 "sha512" 00:24:10.628 ], 00:24:10.628 "dhchap_dhgroups": [ 00:24:10.628 "null", 00:24:10.628 "ffdhe2048", 00:24:10.628 "ffdhe3072", 00:24:10.628 "ffdhe4096", 00:24:10.628 "ffdhe6144", 00:24:10.628 "ffdhe8192" 00:24:10.628 ] 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "nvmf_set_max_subsystems", 00:24:10.628 "params": { 00:24:10.628 "max_subsystems": 1024 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "nvmf_set_crdt", 00:24:10.628 "params": { 00:24:10.628 "crdt1": 0, 00:24:10.628 "crdt2": 0, 00:24:10.628 "crdt3": 0 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "nvmf_create_transport", 00:24:10.628 "params": 
{ 00:24:10.628 "trtype": "TCP", 00:24:10.628 "max_queue_depth": 128, 00:24:10.628 "max_io_qpairs_per_ctrlr": 127, 00:24:10.628 "in_capsule_data_size": 4096, 00:24:10.628 "max_io_size": 131072, 00:24:10.628 "io_unit_size": 131072, 00:24:10.628 "max_aq_depth": 128, 00:24:10.628 "num_shared_buffers": 511, 00:24:10.628 "buf_cache_size": 4294967295, 00:24:10.628 "dif_insert_or_strip": false, 00:24:10.628 "zcopy": false, 00:24:10.628 "c2h_success": false, 00:24:10.628 "sock_priority": 0, 00:24:10.628 "abort_timeout_sec": 1, 00:24:10.628 "ack_timeout": 0, 00:24:10.628 "data_wr_pool_size": 0 00:24:10.628 } 00:24:10.628 }, 00:24:10.628 { 00:24:10.628 "method": "nvmf_create_subsystem", 00:24:10.628 "params": { 00:24:10.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.628 "allow_any_host": false, 00:24:10.628 "serial_number": "00000000000000000000", 00:24:10.628 "model_number": "SPDK bdev Controller", 00:24:10.629 "max_namespaces": 32, 00:24:10.629 "min_cntlid": 1, 00:24:10.629 "max_cntlid": 65519, 00:24:10.629 "ana_reporting": false 00:24:10.629 } 00:24:10.629 }, 00:24:10.629 { 00:24:10.629 "method": "nvmf_subsystem_add_host", 00:24:10.629 "params": { 00:24:10.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.629 "host": "nqn.2016-06.io.spdk:host1", 00:24:10.629 "psk": "key0" 00:24:10.629 } 00:24:10.629 }, 00:24:10.629 { 00:24:10.629 "method": "nvmf_subsystem_add_ns", 00:24:10.629 "params": { 00:24:10.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.629 "namespace": { 00:24:10.629 "nsid": 1, 00:24:10.629 "bdev_name": "malloc0", 00:24:10.629 "nguid": "5A5BB891D6874A6086FB5F22D99FDD2A", 00:24:10.629 "uuid": "5a5bb891-d687-4a60-86fb-5f22d99fdd2a", 00:24:10.629 "no_auto_visible": false 00:24:10.629 } 00:24:10.629 } 00:24:10.629 }, 00:24:10.629 { 00:24:10.629 "method": "nvmf_subsystem_add_listener", 00:24:10.629 "params": { 00:24:10.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.629 "listen_address": { 00:24:10.629 "trtype": "TCP", 00:24:10.629 "adrfam": "IPv4", 00:24:10.629 "traddr": "10.0.0.2", 00:24:10.629 "trsvcid": "4420" 00:24:10.629 }, 00:24:10.629 "secure_channel": false, 00:24:10.629 "sock_impl": "ssl" 00:24:10.629 } 00:24:10.629 } 00:24:10.629 ] 00:24:10.629 } 00:24:10.629 ] 00:24:10.629 }' 00:24:10.629 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:10.888 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:10.888 "subsystems": [ 00:24:10.888 { 00:24:10.888 "subsystem": "keyring", 00:24:10.888 "config": [ 00:24:10.888 { 00:24:10.888 "method": "keyring_file_add_key", 00:24:10.888 "params": { 00:24:10.888 "name": "key0", 00:24:10.888 "path": "/tmp/tmp.PwrbRy4nk8" 00:24:10.888 } 00:24:10.888 } 00:24:10.888 ] 00:24:10.888 }, 00:24:10.888 { 00:24:10.888 "subsystem": "iobuf", 00:24:10.888 "config": [ 00:24:10.888 { 00:24:10.888 "method": "iobuf_set_options", 00:24:10.888 "params": { 00:24:10.888 "small_pool_count": 8192, 00:24:10.888 "large_pool_count": 1024, 00:24:10.888 "small_bufsize": 8192, 00:24:10.888 "large_bufsize": 135168, 00:24:10.889 "enable_numa": false 00:24:10.889 } 00:24:10.889 } 00:24:10.889 ] 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "subsystem": "sock", 00:24:10.889 "config": [ 00:24:10.889 { 00:24:10.889 "method": "sock_set_default_impl", 00:24:10.889 "params": { 00:24:10.889 "impl_name": "posix" 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "sock_impl_set_options", 00:24:10.889 
"params": { 00:24:10.889 "impl_name": "ssl", 00:24:10.889 "recv_buf_size": 4096, 00:24:10.889 "send_buf_size": 4096, 00:24:10.889 "enable_recv_pipe": true, 00:24:10.889 "enable_quickack": false, 00:24:10.889 "enable_placement_id": 0, 00:24:10.889 "enable_zerocopy_send_server": true, 00:24:10.889 "enable_zerocopy_send_client": false, 00:24:10.889 "zerocopy_threshold": 0, 00:24:10.889 "tls_version": 0, 00:24:10.889 "enable_ktls": false 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "sock_impl_set_options", 00:24:10.889 "params": { 00:24:10.889 "impl_name": "posix", 00:24:10.889 "recv_buf_size": 2097152, 00:24:10.889 "send_buf_size": 2097152, 00:24:10.889 "enable_recv_pipe": true, 00:24:10.889 "enable_quickack": false, 00:24:10.889 "enable_placement_id": 0, 00:24:10.889 "enable_zerocopy_send_server": true, 00:24:10.889 "enable_zerocopy_send_client": false, 00:24:10.889 "zerocopy_threshold": 0, 00:24:10.889 "tls_version": 0, 00:24:10.889 "enable_ktls": false 00:24:10.889 } 00:24:10.889 } 00:24:10.889 ] 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "subsystem": "vmd", 00:24:10.889 "config": [] 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "subsystem": "accel", 00:24:10.889 "config": [ 00:24:10.889 { 00:24:10.889 "method": "accel_set_options", 00:24:10.889 "params": { 00:24:10.889 "small_cache_size": 128, 00:24:10.889 "large_cache_size": 16, 00:24:10.889 "task_count": 2048, 00:24:10.889 "sequence_count": 2048, 00:24:10.889 "buf_count": 2048 00:24:10.889 } 00:24:10.889 } 00:24:10.889 ] 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "subsystem": "bdev", 00:24:10.889 "config": [ 00:24:10.889 { 00:24:10.889 "method": "bdev_set_options", 00:24:10.889 "params": { 00:24:10.889 "bdev_io_pool_size": 65535, 00:24:10.889 "bdev_io_cache_size": 256, 00:24:10.889 "bdev_auto_examine": true, 00:24:10.889 "iobuf_small_cache_size": 128, 00:24:10.889 "iobuf_large_cache_size": 16 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "bdev_raid_set_options", 00:24:10.889 "params": { 00:24:10.889 "process_window_size_kb": 1024, 00:24:10.889 "process_max_bandwidth_mb_sec": 0 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "bdev_iscsi_set_options", 00:24:10.889 "params": { 00:24:10.889 "timeout_sec": 30 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "bdev_nvme_set_options", 00:24:10.889 "params": { 00:24:10.889 "action_on_timeout": "none", 00:24:10.889 "timeout_us": 0, 00:24:10.889 "timeout_admin_us": 0, 00:24:10.889 "keep_alive_timeout_ms": 10000, 00:24:10.889 "arbitration_burst": 0, 00:24:10.889 "low_priority_weight": 0, 00:24:10.889 "medium_priority_weight": 0, 00:24:10.889 "high_priority_weight": 0, 00:24:10.889 "nvme_adminq_poll_period_us": 10000, 00:24:10.889 "nvme_ioq_poll_period_us": 0, 00:24:10.889 "io_queue_requests": 512, 00:24:10.889 "delay_cmd_submit": true, 00:24:10.889 "transport_retry_count": 4, 00:24:10.889 "bdev_retry_count": 3, 00:24:10.889 "transport_ack_timeout": 0, 00:24:10.889 "ctrlr_loss_timeout_sec": 0, 00:24:10.889 "reconnect_delay_sec": 0, 00:24:10.889 "fast_io_fail_timeout_sec": 0, 00:24:10.889 "disable_auto_failback": false, 00:24:10.889 "generate_uuids": false, 00:24:10.889 "transport_tos": 0, 00:24:10.889 "nvme_error_stat": false, 00:24:10.889 "rdma_srq_size": 0, 00:24:10.889 "io_path_stat": false, 00:24:10.889 "allow_accel_sequence": false, 00:24:10.889 "rdma_max_cq_size": 0, 00:24:10.889 "rdma_cm_event_timeout_ms": 0, 00:24:10.889 "dhchap_digests": [ 00:24:10.889 "sha256", 00:24:10.889 "sha384", 00:24:10.889 
"sha512" 00:24:10.889 ], 00:24:10.889 "dhchap_dhgroups": [ 00:24:10.889 "null", 00:24:10.889 "ffdhe2048", 00:24:10.889 "ffdhe3072", 00:24:10.889 "ffdhe4096", 00:24:10.889 "ffdhe6144", 00:24:10.889 "ffdhe8192" 00:24:10.889 ] 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "bdev_nvme_attach_controller", 00:24:10.889 "params": { 00:24:10.889 "name": "nvme0", 00:24:10.889 "trtype": "TCP", 00:24:10.889 "adrfam": "IPv4", 00:24:10.889 "traddr": "10.0.0.2", 00:24:10.889 "trsvcid": "4420", 00:24:10.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.889 "prchk_reftag": false, 00:24:10.889 "prchk_guard": false, 00:24:10.889 "ctrlr_loss_timeout_sec": 0, 00:24:10.889 "reconnect_delay_sec": 0, 00:24:10.889 "fast_io_fail_timeout_sec": 0, 00:24:10.889 "psk": "key0", 00:24:10.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.889 "hdgst": false, 00:24:10.889 "ddgst": false, 00:24:10.889 "multipath": "multipath" 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "bdev_nvme_set_hotplug", 00:24:10.889 "params": { 00:24:10.889 "period_us": 100000, 00:24:10.889 "enable": false 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "bdev_enable_histogram", 00:24:10.889 "params": { 00:24:10.889 "name": "nvme0n1", 00:24:10.889 "enable": true 00:24:10.889 } 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "method": "bdev_wait_for_examine" 00:24:10.889 } 00:24:10.889 ] 00:24:10.889 }, 00:24:10.889 { 00:24:10.889 "subsystem": "nbd", 00:24:10.889 "config": [] 00:24:10.889 } 00:24:10.889 ] 00:24:10.889 }' 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 573782 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 573782 ']' 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 573782 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 573782 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 573782' 00:24:10.889 killing process with pid 573782 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 573782 00:24:10.889 Received shutdown signal, test time was about 1.000000 seconds 00:24:10.889 00:24:10.889 Latency(us) 00:24:10.889 [2024-11-20T05:34:42.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.889 [2024-11-20T05:34:42.725Z] =================================================================================================================== 00:24:10.889 [2024-11-20T05:34:42.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.889 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 573782 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 573640 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 573640 ']' 
00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 573640 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 573640 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 573640' 00:24:11.149 killing process with pid 573640 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 573640 00:24:11.149 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 573640 00:24:11.409 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:11.409 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:11.409 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:11.409 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:11.409 "subsystems": [ 00:24:11.409 { 00:24:11.409 "subsystem": "keyring", 00:24:11.409 "config": [ 00:24:11.409 { 00:24:11.409 "method": "keyring_file_add_key", 00:24:11.409 "params": { 00:24:11.409 "name": "key0", 00:24:11.409 "path": "/tmp/tmp.PwrbRy4nk8" 00:24:11.409 } 00:24:11.409 } 00:24:11.409 ] 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "subsystem": "iobuf", 00:24:11.409 "config": [ 00:24:11.409 { 00:24:11.409 "method": "iobuf_set_options", 00:24:11.409 "params": { 00:24:11.409 "small_pool_count": 8192, 00:24:11.409 "large_pool_count": 1024, 00:24:11.409 "small_bufsize": 8192, 00:24:11.409 "large_bufsize": 135168, 00:24:11.409 "enable_numa": false 00:24:11.409 } 00:24:11.409 } 00:24:11.409 ] 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "subsystem": "sock", 00:24:11.409 "config": [ 00:24:11.409 { 00:24:11.409 "method": "sock_set_default_impl", 00:24:11.409 "params": { 00:24:11.409 "impl_name": "posix" 00:24:11.409 } 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "method": "sock_impl_set_options", 00:24:11.409 "params": { 00:24:11.409 "impl_name": "ssl", 00:24:11.409 "recv_buf_size": 4096, 00:24:11.409 "send_buf_size": 4096, 00:24:11.409 "enable_recv_pipe": true, 00:24:11.409 "enable_quickack": false, 00:24:11.409 "enable_placement_id": 0, 00:24:11.409 "enable_zerocopy_send_server": true, 00:24:11.409 "enable_zerocopy_send_client": false, 00:24:11.409 "zerocopy_threshold": 0, 00:24:11.409 "tls_version": 0, 00:24:11.409 "enable_ktls": false 00:24:11.409 } 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "method": "sock_impl_set_options", 00:24:11.409 "params": { 00:24:11.409 "impl_name": "posix", 00:24:11.409 "recv_buf_size": 2097152, 00:24:11.409 "send_buf_size": 2097152, 00:24:11.409 "enable_recv_pipe": true, 00:24:11.409 "enable_quickack": false, 00:24:11.409 "enable_placement_id": 0, 00:24:11.409 "enable_zerocopy_send_server": true, 00:24:11.409 "enable_zerocopy_send_client": false, 00:24:11.409 "zerocopy_threshold": 0, 00:24:11.409 "tls_version": 0, 00:24:11.409 "enable_ktls": false 
00:24:11.409 } 00:24:11.409 } 00:24:11.409 ] 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "subsystem": "vmd", 00:24:11.409 "config": [] 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "subsystem": "accel", 00:24:11.409 "config": [ 00:24:11.409 { 00:24:11.409 "method": "accel_set_options", 00:24:11.409 "params": { 00:24:11.409 "small_cache_size": 128, 00:24:11.409 "large_cache_size": 16, 00:24:11.409 "task_count": 2048, 00:24:11.409 "sequence_count": 2048, 00:24:11.409 "buf_count": 2048 00:24:11.409 } 00:24:11.409 } 00:24:11.409 ] 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "subsystem": "bdev", 00:24:11.409 "config": [ 00:24:11.409 { 00:24:11.409 "method": "bdev_set_options", 00:24:11.409 "params": { 00:24:11.409 "bdev_io_pool_size": 65535, 00:24:11.409 "bdev_io_cache_size": 256, 00:24:11.409 "bdev_auto_examine": true, 00:24:11.409 "iobuf_small_cache_size": 128, 00:24:11.409 "iobuf_large_cache_size": 16 00:24:11.409 } 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "method": "bdev_raid_set_options", 00:24:11.409 "params": { 00:24:11.409 "process_window_size_kb": 1024, 00:24:11.409 "process_max_bandwidth_mb_sec": 0 00:24:11.409 } 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "method": "bdev_iscsi_set_options", 00:24:11.409 "params": { 00:24:11.409 "timeout_sec": 30 00:24:11.409 } 00:24:11.409 }, 00:24:11.409 { 00:24:11.409 "method": "bdev_nvme_set_options", 00:24:11.409 "params": { 00:24:11.409 "action_on_timeout": "none", 00:24:11.409 "timeout_us": 0, 00:24:11.409 "timeout_admin_us": 0, 00:24:11.409 "keep_alive_timeout_ms": 10000, 00:24:11.409 "arbitration_burst": 0, 00:24:11.409 "low_priority_weight": 0, 00:24:11.409 "medium_priority_weight": 0, 00:24:11.409 "high_priority_weight": 0, 00:24:11.409 "nvme_adminq_poll_period_us": 10000, 00:24:11.409 "nvme_ioq_poll_period_us": 0, 00:24:11.409 "io_queue_requests": 0, 00:24:11.409 "delay_cmd_submit": true, 00:24:11.409 "transport_retry_count": 4, 00:24:11.409 "bdev_retry_count": 3, 00:24:11.409 "transport_ack_timeout": 0, 00:24:11.409 "ctrlr_loss_timeout_sec": 0, 00:24:11.409 "reconnect_delay_sec": 0, 00:24:11.409 "fast_io_fail_timeout_sec": 0, 00:24:11.409 "disable_auto_failback": false, 00:24:11.409 "generate_uuids": false, 00:24:11.409 "transport_tos": 0, 00:24:11.409 "nvme_error_stat": false, 00:24:11.409 "rdma_srq_size": 0, 00:24:11.409 "io_path_stat": false, 00:24:11.410 "allow_accel_sequence": false, 00:24:11.410 "rdma_max_cq_size": 0, 00:24:11.410 "rdma_cm_event_timeout_ms": 0, 00:24:11.410 "dhchap_digests": [ 00:24:11.410 "sha256", 00:24:11.410 "sha384", 00:24:11.410 "sha512" 00:24:11.410 ], 00:24:11.410 "dhchap_dhgroups": [ 00:24:11.410 "null", 00:24:11.410 "ffdhe2048", 00:24:11.410 "ffdhe3072", 00:24:11.410 "ffdhe4096", 00:24:11.410 "ffdhe6144", 00:24:11.410 "ffdhe8192" 00:24:11.410 ] 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "bdev_nvme_set_hotplug", 00:24:11.410 "params": { 00:24:11.410 "period_us": 100000, 00:24:11.410 "enable": false 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "bdev_malloc_create", 00:24:11.410 "params": { 00:24:11.410 "name": "malloc0", 00:24:11.410 "num_blocks": 8192, 00:24:11.410 "block_size": 4096, 00:24:11.410 "physical_block_size": 4096, 00:24:11.410 "uuid": "5a5bb891-d687-4a60-86fb-5f22d99fdd2a", 00:24:11.410 "optimal_io_boundary": 0, 00:24:11.410 "md_size": 0, 00:24:11.410 "dif_type": 0, 00:24:11.410 "dif_is_head_of_md": false, 00:24:11.410 "dif_pi_format": 0 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "bdev_wait_for_examine" 00:24:11.410 } 
00:24:11.410 ] 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "subsystem": "nbd", 00:24:11.410 "config": [] 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "subsystem": "scheduler", 00:24:11.410 "config": [ 00:24:11.410 { 00:24:11.410 "method": "framework_set_scheduler", 00:24:11.410 "params": { 00:24:11.410 "name": "static" 00:24:11.410 } 00:24:11.410 } 00:24:11.410 ] 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "subsystem": "nvmf", 00:24:11.410 "config": [ 00:24:11.410 { 00:24:11.410 "method": "nvmf_set_config", 00:24:11.410 "params": { 00:24:11.410 "discovery_filter": "match_any", 00:24:11.410 "admin_cmd_passthru": { 00:24:11.410 "identify_ctrlr": false 00:24:11.410 }, 00:24:11.410 "dhchap_digests": [ 00:24:11.410 "sha256", 00:24:11.410 "sha384", 00:24:11.410 "sha512" 00:24:11.410 ], 00:24:11.410 "dhchap_dhgroups": [ 00:24:11.410 "null", 00:24:11.410 "ffdhe2048", 00:24:11.410 "ffdhe3072", 00:24:11.410 "ffdhe4096", 00:24:11.410 "ffdhe6144", 00:24:11.410 "ffdhe8192" 00:24:11.410 ] 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "nvmf_set_max_subsystems", 00:24:11.410 "params": { 00:24:11.410 "max_subsystems": 1024 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "nvmf_set_crdt", 00:24:11.410 "params": { 00:24:11.410 "crdt1": 0, 00:24:11.410 "crdt2": 0, 00:24:11.410 "crdt3": 0 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "nvmf_create_transport", 00:24:11.410 "params": { 00:24:11.410 "trtype": "TCP", 00:24:11.410 "max_queue_depth": 128, 00:24:11.410 "max_io_qpairs_per_ctrlr": 127, 00:24:11.410 "in_capsule_data_size": 4096, 00:24:11.410 "max_io_size": 131072, 00:24:11.410 "io_unit_size": 131072, 00:24:11.410 "max_aq_depth": 128, 00:24:11.410 "num_shared_buffers": 511, 00:24:11.410 "buf_cache_size": 4294967295, 00:24:11.410 "dif_insert_or_strip": false, 00:24:11.410 "zcopy": false, 00:24:11.410 "c2h_success": false, 00:24:11.410 "sock_priority": 0, 00:24:11.410 "abort_timeout_sec": 1, 00:24:11.410 "ack_timeout": 0, 00:24:11.410 "data_wr_pool_size": 0 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "nvmf_create_subsystem", 00:24:11.410 "params": { 00:24:11.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.410 "allow_any_host": false, 00:24:11.410 "serial_number": "00000000000000000000", 00:24:11.410 "model_number": "SPDK bdev Controller", 00:24:11.410 "max_namespaces": 32, 00:24:11.410 "min_cntlid": 1, 00:24:11.410 "max_cntlid": 65519, 00:24:11.410 "ana_reporting": false 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "nvmf_subsystem_add_host", 00:24:11.410 "params": { 00:24:11.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.410 "host": "nqn.2016-06.io.spdk:host1", 00:24:11.410 "psk": "key0" 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "nvmf_subsystem_add_ns", 00:24:11.410 "params": { 00:24:11.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.410 "namespace": { 00:24:11.410 "nsid": 1, 00:24:11.410 "bdev_name": "malloc0", 00:24:11.410 "nguid": "5A5BB891D6874A6086FB5F22D99FDD2A", 00:24:11.410 "uuid": "5a5bb891-d687-4a60-86fb-5f22d99fdd2a", 00:24:11.410 "no_auto_visible": false 00:24:11.410 } 00:24:11.410 } 00:24:11.410 }, 00:24:11.410 { 00:24:11.410 "method": "nvmf_subsystem_add_listener", 00:24:11.410 "params": { 00:24:11.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.410 "listen_address": { 00:24:11.410 "trtype": "TCP", 00:24:11.410 "adrfam": "IPv4", 00:24:11.410 "traddr": "10.0.0.2", 00:24:11.410 "trsvcid": "4420" 00:24:11.410 }, 00:24:11.410 "secure_channel": false, 
00:24:11.410 "sock_impl": "ssl" 00:24:11.410 } 00:24:11.410 } 00:24:11.410 ] 00:24:11.410 } 00:24:11.410 ] 00:24:11.410 }' 00:24:11.410 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=574207 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 574207 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 574207 ']' 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:11.410 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.410 [2024-11-20 06:34:43.049362] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:11.410 [2024-11-20 06:34:43.049408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.410 [2024-11-20 06:34:43.108499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.410 [2024-11-20 06:34:43.146880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.410 [2024-11-20 06:34:43.146916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.410 [2024-11-20 06:34:43.146923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.410 [2024-11-20 06:34:43.146929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.410 [2024-11-20 06:34:43.146934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
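The nvmfappstart -c /dev/fd/62 invocation above is the replay half of the save/restore exercise: the freshly started nvmf_tgt reads the tgtcfg snapshot through a process-substitution file descriptor, so it comes up with the keyring entry, malloc bdev, subsystem, and ssl-pinned listener already in place and nothing is written to disk. A minimal sketch of the underlying pattern, with this run's netns and binary path ($rpc as in the earlier sketch; the real run goes through the framework's nvmfappstart wrapper):

```bash
# Snapshot the live target configuration as JSON over its RPC socket.
tgtcfg=$($rpc save_config)

# Hand the snapshot to a brand-new target over an anonymous /dev/fd
# descriptor; bash's <( ) supplies the path the trace shows as /dev/fd/62.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
```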
00:24:11.410 [2024-11-20 06:34:43.147510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.669 [2024-11-20 06:34:43.359635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.669 [2024-11-20 06:34:43.391671] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.669 [2024-11-20 06:34:43.391870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=574284 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 574284 /var/tmp/bdevperf.sock 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 574284 ']' 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:12.238 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:12.238 "subsystems": [ 00:24:12.238 { 00:24:12.238 "subsystem": "keyring", 00:24:12.238 "config": [ 00:24:12.238 { 00:24:12.238 "method": "keyring_file_add_key", 00:24:12.238 "params": { 00:24:12.238 "name": "key0", 00:24:12.238 "path": "/tmp/tmp.PwrbRy4nk8" 00:24:12.238 } 00:24:12.238 } 00:24:12.238 ] 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "subsystem": "iobuf", 00:24:12.238 "config": [ 00:24:12.238 { 00:24:12.238 "method": "iobuf_set_options", 00:24:12.238 "params": { 00:24:12.238 "small_pool_count": 8192, 00:24:12.238 "large_pool_count": 1024, 00:24:12.238 "small_bufsize": 8192, 00:24:12.238 "large_bufsize": 135168, 00:24:12.238 "enable_numa": false 00:24:12.238 } 00:24:12.238 } 00:24:12.238 ] 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "subsystem": "sock", 00:24:12.238 "config": [ 00:24:12.238 { 00:24:12.238 "method": "sock_set_default_impl", 00:24:12.238 "params": { 00:24:12.238 "impl_name": "posix" 00:24:12.238 } 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "method": "sock_impl_set_options", 00:24:12.238 "params": { 00:24:12.238 "impl_name": "ssl", 00:24:12.238 "recv_buf_size": 4096, 00:24:12.238 "send_buf_size": 4096, 00:24:12.238 "enable_recv_pipe": true, 00:24:12.238 "enable_quickack": false, 00:24:12.238 "enable_placement_id": 0, 00:24:12.238 "enable_zerocopy_send_server": true, 00:24:12.238 "enable_zerocopy_send_client": false, 00:24:12.238 "zerocopy_threshold": 0, 00:24:12.238 "tls_version": 0, 00:24:12.238 "enable_ktls": false 00:24:12.238 } 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "method": "sock_impl_set_options", 00:24:12.238 "params": { 00:24:12.238 "impl_name": "posix", 00:24:12.238 "recv_buf_size": 2097152, 00:24:12.238 "send_buf_size": 2097152, 00:24:12.238 "enable_recv_pipe": true, 00:24:12.238 "enable_quickack": false, 00:24:12.238 "enable_placement_id": 0, 00:24:12.238 "enable_zerocopy_send_server": true, 00:24:12.238 "enable_zerocopy_send_client": false, 00:24:12.238 "zerocopy_threshold": 0, 00:24:12.238 "tls_version": 0, 00:24:12.238 "enable_ktls": false 00:24:12.238 } 00:24:12.238 } 00:24:12.238 ] 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "subsystem": "vmd", 00:24:12.238 "config": [] 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "subsystem": "accel", 00:24:12.238 "config": [ 00:24:12.238 { 00:24:12.238 "method": "accel_set_options", 00:24:12.238 "params": { 00:24:12.238 "small_cache_size": 128, 00:24:12.238 "large_cache_size": 16, 00:24:12.238 "task_count": 2048, 00:24:12.238 "sequence_count": 2048, 00:24:12.238 "buf_count": 2048 00:24:12.238 } 00:24:12.238 } 00:24:12.238 ] 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "subsystem": "bdev", 00:24:12.238 "config": [ 00:24:12.238 { 00:24:12.238 "method": "bdev_set_options", 00:24:12.238 "params": { 00:24:12.238 "bdev_io_pool_size": 65535, 00:24:12.238 "bdev_io_cache_size": 256, 00:24:12.238 "bdev_auto_examine": true, 00:24:12.238 "iobuf_small_cache_size": 128, 00:24:12.238 "iobuf_large_cache_size": 16 00:24:12.238 } 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "method": "bdev_raid_set_options", 00:24:12.238 "params": { 00:24:12.238 "process_window_size_kb": 1024, 00:24:12.238 "process_max_bandwidth_mb_sec": 0 00:24:12.238 } 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "method": "bdev_iscsi_set_options", 00:24:12.238 "params": { 00:24:12.238 "timeout_sec": 30 00:24:12.238 } 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "method": "bdev_nvme_set_options", 00:24:12.238 "params": { 00:24:12.238 "action_on_timeout": "none", 
00:24:12.238 "timeout_us": 0, 00:24:12.238 "timeout_admin_us": 0, 00:24:12.238 "keep_alive_timeout_ms": 10000, 00:24:12.238 "arbitration_burst": 0, 00:24:12.238 "low_priority_weight": 0, 00:24:12.238 "medium_priority_weight": 0, 00:24:12.238 "high_priority_weight": 0, 00:24:12.238 "nvme_adminq_poll_period_us": 10000, 00:24:12.238 "nvme_ioq_poll_period_us": 0, 00:24:12.238 "io_queue_requests": 512, 00:24:12.238 "delay_cmd_submit": true, 00:24:12.238 "transport_retry_count": 4, 00:24:12.238 "bdev_retry_count": 3, 00:24:12.238 "transport_ack_timeout": 0, 00:24:12.238 "ctrlr_loss_timeout_sec": 0, 00:24:12.238 "reconnect_delay_sec": 0, 00:24:12.238 "fast_io_fail_timeout_sec": 0, 00:24:12.238 "disable_auto_failback": false, 00:24:12.238 "generate_uuids": false, 00:24:12.238 "transport_tos": 0, 00:24:12.238 "nvme_error_stat": false, 00:24:12.238 "rdma_srq_size": 0, 00:24:12.238 "io_path_stat": false, 00:24:12.238 "allow_accel_sequence": false, 00:24:12.238 "rdma_max_cq_size": 0, 00:24:12.238 "rdma_cm_event_timeout_ms": 0, 00:24:12.238 "dhchap_digests": [ 00:24:12.238 "sha256", 00:24:12.238 "sha384", 00:24:12.238 "sha512" 00:24:12.238 ], 00:24:12.238 "dhchap_dhgroups": [ 00:24:12.238 "null", 00:24:12.238 "ffdhe2048", 00:24:12.238 "ffdhe3072", 00:24:12.238 "ffdhe4096", 00:24:12.238 "ffdhe6144", 00:24:12.238 "ffdhe8192" 00:24:12.238 ] 00:24:12.238 } 00:24:12.238 }, 00:24:12.238 { 00:24:12.238 "method": "bdev_nvme_attach_controller", 00:24:12.238 "params": { 00:24:12.238 "name": "nvme0", 00:24:12.238 "trtype": "TCP", 00:24:12.238 "adrfam": "IPv4", 00:24:12.238 "traddr": "10.0.0.2", 00:24:12.238 "trsvcid": "4420", 00:24:12.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.238 "prchk_reftag": false, 00:24:12.238 "prchk_guard": false, 00:24:12.238 "ctrlr_loss_timeout_sec": 0, 00:24:12.238 "reconnect_delay_sec": 0, 00:24:12.238 "fast_io_fail_timeout_sec": 0, 00:24:12.238 "psk": "key0", 00:24:12.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.238 "hdgst": false, 00:24:12.239 "ddgst": false, 00:24:12.239 "multipath": "multipath" 00:24:12.239 } 00:24:12.239 }, 00:24:12.239 { 00:24:12.239 "method": "bdev_nvme_set_hotplug", 00:24:12.239 "params": { 00:24:12.239 "period_us": 100000, 00:24:12.239 "enable": false 00:24:12.239 } 00:24:12.239 }, 00:24:12.239 { 00:24:12.239 "method": "bdev_enable_histogram", 00:24:12.239 "params": { 00:24:12.239 "name": "nvme0n1", 00:24:12.239 "enable": true 00:24:12.239 } 00:24:12.239 }, 00:24:12.239 { 00:24:12.239 "method": "bdev_wait_for_examine" 00:24:12.239 } 00:24:12.239 ] 00:24:12.239 }, 00:24:12.239 { 00:24:12.239 "subsystem": "nbd", 00:24:12.239 "config": [] 00:24:12.239 } 00:24:12.239 ] 00:24:12.239 }' 00:24:12.239 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:12.239 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.239 [2024-11-20 06:34:43.972501] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:24:12.239 [2024-11-20 06:34:43.972546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid574284 ] 00:24:12.239 [2024-11-20 06:34:44.046494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.498 [2024-11-20 06:34:44.089423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.498 [2024-11-20 06:34:44.240997] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.065 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:13.065 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:13.065 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:13.065 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:13.323 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.323 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.323 Running I/O for 1 seconds... 00:24:14.701 5325.00 IOPS, 20.80 MiB/s 00:24:14.701 Latency(us) 00:24:14.701 [2024-11-20T05:34:46.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.701 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:14.701 Verification LBA range: start 0x0 length 0x2000 00:24:14.701 nvme0n1 : 1.01 5386.68 21.04 0.00 0.00 23607.89 5336.50 34702.87 00:24:14.701 [2024-11-20T05:34:46.537Z] =================================================================================================================== 00:24:14.701 [2024-11-20T05:34:46.537Z] Total : 5386.68 21.04 0.00 0.00 23607.89 5336.50 34702.87 00:24:14.701 { 00:24:14.701 "results": [ 00:24:14.701 { 00:24:14.701 "job": "nvme0n1", 00:24:14.701 "core_mask": "0x2", 00:24:14.701 "workload": "verify", 00:24:14.701 "status": "finished", 00:24:14.701 "verify_range": { 00:24:14.701 "start": 0, 00:24:14.701 "length": 8192 00:24:14.701 }, 00:24:14.701 "queue_depth": 128, 00:24:14.701 "io_size": 4096, 00:24:14.701 "runtime": 1.012498, 00:24:14.701 "iops": 5386.677307016903, 00:24:14.701 "mibps": 21.041708230534777, 00:24:14.701 "io_failed": 0, 00:24:14.701 "io_timeout": 0, 00:24:14.701 "avg_latency_us": 23607.88861717918, 00:24:14.701 "min_latency_us": 5336.5028571428575, 00:24:14.701 "max_latency_us": 34702.87238095238 00:24:14.701 } 00:24:14.701 ], 00:24:14.701 "core_count": 1 00:24:14.701 } 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = 
--pid ']' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:14.701 nvmf_trace.0 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 574284 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 574284 ']' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 574284 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 574284 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 574284' 00:24:14.701 killing process with pid 574284 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 574284 00:24:14.701 Received shutdown signal, test time was about 1.000000 seconds 00:24:14.701 00:24:14.701 Latency(us) 00:24:14.701 [2024-11-20T05:34:46.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.701 [2024-11-20T05:34:46.537Z] =================================================================================================================== 00:24:14.701 [2024-11-20T05:34:46.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 574284 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.701 rmmod nvme_tcp 00:24:14.701 rmmod nvme_fabrics 00:24:14.701 rmmod nvme_keyring 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.701 06:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 574207 ']' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 574207 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 574207 ']' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 574207 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:14.701 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 574207 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 574207' 00:24:14.961 killing process with pid 574207 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 574207 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 574207 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.961 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.499 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.p2B4zWkw6i /tmp/tmp.wAjNDn2XZV /tmp/tmp.PwrbRy4nk8 00:24:17.500 00:24:17.500 real 1m19.669s 00:24:17.500 user 2m1.947s 00:24:17.500 sys 0m30.088s 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.500 ************************************ 00:24:17.500 END TEST nvmf_tls 00:24:17.500 
************************************ 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:17.500 ************************************ 00:24:17.500 START TEST nvmf_fips 00:24:17.500 ************************************ 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:17.500 * Looking for test storage... 00:24:17.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:17.500 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:17.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.500 --rc genhtml_branch_coverage=1 00:24:17.500 --rc genhtml_function_coverage=1 00:24:17.500 --rc genhtml_legend=1 00:24:17.500 --rc geninfo_all_blocks=1 00:24:17.500 --rc geninfo_unexecuted_blocks=1 00:24:17.500 00:24:17.500 ' 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:17.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.500 --rc genhtml_branch_coverage=1 00:24:17.500 --rc genhtml_function_coverage=1 00:24:17.500 --rc genhtml_legend=1 00:24:17.500 --rc geninfo_all_blocks=1 00:24:17.500 --rc geninfo_unexecuted_blocks=1 00:24:17.500 00:24:17.500 ' 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:17.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.500 --rc genhtml_branch_coverage=1 00:24:17.500 --rc genhtml_function_coverage=1 00:24:17.500 --rc genhtml_legend=1 00:24:17.500 --rc geninfo_all_blocks=1 00:24:17.500 --rc geninfo_unexecuted_blocks=1 00:24:17.500 00:24:17.500 ' 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:17.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.500 --rc genhtml_branch_coverage=1 00:24:17.500 --rc genhtml_function_coverage=1 00:24:17.500 --rc genhtml_legend=1 00:24:17.500 --rc geninfo_all_blocks=1 00:24:17.500 --rc geninfo_unexecuted_blocks=1 00:24:17.500 00:24:17.500 ' 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:17.500 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:17.501 06:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:17.501 Error setting digest 00:24:17.501 408211018E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:17.501 408211018E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.501 
06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.501 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.072 06:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:24.072 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:24.072 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.072 06:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:24.072 Found net devices under 0000:86:00.0: cvl_0_0 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.072 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:24.073 Found net devices under 0000:86:00.1: cvl_0_1 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:24.073 06:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.073 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:24.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:24:24.073 00:24:24.073 --- 10.0.0.2 ping statistics --- 00:24:24.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.073 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:24:24.073 00:24:24.073 --- 10.0.0.1 ping statistics --- 00:24:24.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.073 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=578303 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 578303 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 578303 ']' 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:24.073 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:24.073 [2024-11-20 06:34:55.254959] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
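Editor's note: condensed from the nvmfappstart/waitforlisten xtrace above, the launch-and-wait pattern is roughly the following. The binary path, namespace name, and flags are copied from this log; the polling loop is an illustrative stand-in, not the exact waitforlisten implementation in common.sh.
# Sketch only: approximates the nvmfappstart sequence traced above.
# The target runs inside the cvl_0_0_ns_spdk namespace; its JSON-RPC
# UNIX socket is still created on the shared filesystem.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Illustrative wait: block until the RPC socket exists and answers.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
  -s /var/tmp/spdk.sock rpc_get_methods > /dev/null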
00:24:24.073 [2024-11-20 06:34:55.255003] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.073 [2024-11-20 06:34:55.329766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.073 [2024-11-20 06:34:55.367790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.073 [2024-11-20 06:34:55.367819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.073 [2024-11-20 06:34:55.367826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.073 [2024-11-20 06:34:55.367832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.073 [2024-11-20 06:34:55.367837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.073 [2024-11-20 06:34:55.368422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.y2A 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.y2A 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.y2A 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.y2A 00:24:24.333 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:24.592 [2024-11-20 06:34:56.275921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.592 [2024-11-20 06:34:56.291935] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:24.592 [2024-11-20 06:34:56.292096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.592 malloc0 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:24.592 06:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=578552 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 578552 /var/tmp/bdevperf.sock 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 578552 ']' 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:24.592 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:24.592 [2024-11-20 06:34:56.423268] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:24.592 [2024-11-20 06:34:56.423320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578552 ] 00:24:24.852 [2024-11-20 06:34:56.498093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.852 [2024-11-20 06:34:56.538681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.419 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.678 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:25.678 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.y2A 00:24:25.678 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.937 [2024-11-20 06:34:57.631238] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.937 TLSTESTn1 00:24:25.937 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:26.196 Running I/O for 10 seconds... 
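Editor's note: stripped of the xtrace interleaving, the TLS portion of the bdevperf run above reduces to three calls. Every argument below is taken verbatim from this log; only the $RPC shorthand is added here.
# /tmp/spdk-psk.y2A holds the NVMeTLSkey-1:01:... PSK written and
# chmod 0600'd by fips.sh earlier in this log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# 1. Register the pre-shared key with bdevperf's keyring.
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.y2A
# 2. Attach the target subsystem over NVMe/TCP, negotiating TLS with key0.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  -q nqn.2016-06.io.spdk:host1 --psk key0
# 3. Run the workload bdevperf was started with (-q 128 -o 4096 -w verify -t 10).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bdevperf.sock perform_tests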
00:24:28.069 5308.00 IOPS, 20.73 MiB/s [2024-11-20T05:35:00.841Z] 5502.50 IOPS, 21.49 MiB/s [2024-11-20T05:35:02.219Z] 5325.67 IOPS, 20.80 MiB/s [2024-11-20T05:35:03.155Z] 5256.25 IOPS, 20.53 MiB/s [2024-11-20T05:35:04.113Z] 5144.60 IOPS, 20.10 MiB/s [2024-11-20T05:35:05.147Z] 5103.17 IOPS, 19.93 MiB/s [2024-11-20T05:35:06.091Z] 5087.71 IOPS, 19.87 MiB/s [2024-11-20T05:35:07.028Z] 5061.75 IOPS, 19.77 MiB/s [2024-11-20T05:35:07.964Z] 5071.67 IOPS, 19.81 MiB/s [2024-11-20T05:35:07.964Z] 5069.40 IOPS, 19.80 MiB/s 00:24:36.128 Latency(us) 00:24:36.128 [2024-11-20T05:35:07.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.128 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:36.128 Verification LBA range: start 0x0 length 0x2000 00:24:36.128 TLSTESTn1 : 10.02 5072.41 19.81 0.00 0.00 25197.90 6584.81 31831.77 00:24:36.128 [2024-11-20T05:35:07.964Z] =================================================================================================================== 00:24:36.128 [2024-11-20T05:35:07.964Z] Total : 5072.41 19.81 0.00 0.00 25197.90 6584.81 31831.77 00:24:36.128 { 00:24:36.128 "results": [ 00:24:36.128 { 00:24:36.128 "job": "TLSTESTn1", 00:24:36.128 "core_mask": "0x4", 00:24:36.128 "workload": "verify", 00:24:36.128 "status": "finished", 00:24:36.128 "verify_range": { 00:24:36.128 "start": 0, 00:24:36.128 "length": 8192 00:24:36.128 }, 00:24:36.128 "queue_depth": 128, 00:24:36.128 "io_size": 4096, 00:24:36.128 "runtime": 10.019292, 00:24:36.128 "iops": 5072.4142983356505, 00:24:36.128 "mibps": 19.814118352873635, 00:24:36.128 "io_failed": 0, 00:24:36.128 "io_timeout": 0, 00:24:36.128 "avg_latency_us": 25197.89909776606, 00:24:36.128 "min_latency_us": 6584.8076190476195, 00:24:36.128 "max_latency_us": 31831.77142857143 00:24:36.128 } 00:24:36.128 ], 00:24:36.128 "core_count": 1 00:24:36.128 } 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:36.128 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:36.128 nvmf_trace.0 00:24:36.387 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:24:36.387 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 578552 00:24:36.387 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 578552 ']' 00:24:36.387 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 578552 00:24:36.387 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:36.387 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:36.387 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 578552 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 578552' 00:24:36.387 killing process with pid 578552 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 578552 00:24:36.387 Received shutdown signal, test time was about 10.000000 seconds 00:24:36.387 00:24:36.387 Latency(us) 00:24:36.387 [2024-11-20T05:35:08.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.387 [2024-11-20T05:35:08.223Z] =================================================================================================================== 00:24:36.387 [2024-11-20T05:35:08.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 578552 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.387 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.387 rmmod nvme_tcp 00:24:36.387 rmmod nvme_fabrics 00:24:36.387 rmmod nvme_keyring 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 578303 ']' 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 578303 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 578303 ']' 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 578303 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 578303 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:36.647 06:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 578303' 00:24:36.647 killing process with pid 578303 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 578303 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 578303 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.647 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.y2A 00:24:39.188 00:24:39.188 real 0m21.666s 00:24:39.188 user 0m22.637s 00:24:39.188 sys 0m10.438s 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:39.188 ************************************ 00:24:39.188 END TEST nvmf_fips 00:24:39.188 ************************************ 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.188 ************************************ 00:24:39.188 START TEST nvmf_control_msg_list 00:24:39.188 ************************************ 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:39.188 * Looking for test storage... 
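Editor's note: the version gate traced below (lt 1.15 2 for lcov here, and ge 3.1.1 3.0.0 for OpenSSL in the FIPS test above) re-runs every time a test sources common.sh. An approximate reconstruction of the scripts/common.sh comparison it drives, offered as a sketch rather than the authoritative source:
# Sketch of cmp_versions as seen in the xtrace; the real helper also
# validates each component via decimal(). Missing components compare as 0,
# so 1.15 behaves like 1.15.0.
cmp_versions() {
    local op=$2 v ver1 ver2 ver1_l ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>=' ]]; return; fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '>=' ]]  # all components equal: '<' fails, '>=' passes
}
lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2      -> exit 0 (true)
ge() { cmp_versions "$1" '>=' "$2"; }   # ge 3.1.1 3.0.0 -> exit 0 (true)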
00:24:39.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:39.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.188 --rc genhtml_branch_coverage=1 00:24:39.188 --rc genhtml_function_coverage=1 00:24:39.188 --rc genhtml_legend=1 00:24:39.188 --rc geninfo_all_blocks=1 00:24:39.188 --rc geninfo_unexecuted_blocks=1 00:24:39.188 00:24:39.188 ' 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:39.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.188 --rc genhtml_branch_coverage=1 00:24:39.188 --rc genhtml_function_coverage=1 00:24:39.188 --rc genhtml_legend=1 00:24:39.188 --rc geninfo_all_blocks=1 00:24:39.188 --rc geninfo_unexecuted_blocks=1 00:24:39.188 00:24:39.188 ' 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:39.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.188 --rc genhtml_branch_coverage=1 00:24:39.188 --rc genhtml_function_coverage=1 00:24:39.188 --rc genhtml_legend=1 00:24:39.188 --rc geninfo_all_blocks=1 00:24:39.188 --rc geninfo_unexecuted_blocks=1 00:24:39.188 00:24:39.188 ' 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:39.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.188 --rc genhtml_branch_coverage=1 00:24:39.188 --rc genhtml_function_coverage=1 00:24:39.188 --rc genhtml_legend=1 00:24:39.188 --rc geninfo_all_blocks=1 00:24:39.188 --rc geninfo_unexecuted_blocks=1 00:24:39.188 00:24:39.188 ' 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.188 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.189 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:45.760 06:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:45.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.760 06:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:45.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:45.760 Found net devices under 0000:86:00.0: cvl_0_0 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:45.760 Found net devices under 0000:86:00.1: cvl_0_1 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.760 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.761 06:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:45.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:24:45.761 00:24:45.761 --- 10.0.0.2 ping statistics --- 00:24:45.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.761 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:24:45.761 00:24:45.761 --- 10.0.0.1 ping statistics --- 00:24:45.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.761 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=583934 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 583934 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 583934 ']' 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:45.761 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 [2024-11-20 06:35:16.812070] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:45.761 [2024-11-20 06:35:16.812118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.761 [2024-11-20 06:35:16.891737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.761 [2024-11-20 06:35:16.932053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.761 [2024-11-20 06:35:16.932087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.761 [2024-11-20 06:35:16.932094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.761 [2024-11-20 06:35:16.932100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.761 [2024-11-20 06:35:16.932105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
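Note on the nvmftestinit trace above: the network plumbing it performs condenses to the sketch below. This is a recap assembled from the commands visible in the log, not the SPDK helper itself; cvl_0_0 and cvl_0_1 are the two E810 ports discovered earlier in the trace.

    # Put the target-side port in its own namespace so initiator and target
    # traffic cross a real NIC pair instead of the loopback device.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address both ends of the 10.0.0.0/24 test network.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # Bring up the links, including the namespaced loopback.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagging the rule so teardown can find it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify connectivity in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF), and the suite blocks on /var/tmp/spdk.sock until the reactor-start notice that follows.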
00:24:45.761 [2024-11-20 06:35:16.932659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 [2024-11-20 06:35:17.067854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 Malloc0 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.761 06:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 [2024-11-20 06:35:17.108094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=583977 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=583978 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=583980 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.761 06:35:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 583977 00:24:45.761 [2024-11-20 06:35:17.196764] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:45.761 [2024-11-20 06:35:17.196953] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:45.761 [2024-11-20 06:35:17.197100] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:46.696 Initializing NVMe Controllers 00:24:46.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:46.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:46.696 Initialization complete. Launching workers. 
00:24:46.696 ======================================================== 00:24:46.696 Latency(us) 00:24:46.696 Device Information : IOPS MiB/s Average min max 00:24:46.696 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41096.21 40719.31 41997.97 00:24:46.696 ======================================================== 00:24:46.696 Total : 25.00 0.10 41096.21 40719.31 41997.97 00:24:46.696 00:24:46.696 Initializing NVMe Controllers 00:24:46.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:46.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:46.696 Initialization complete. Launching workers. 00:24:46.696 ======================================================== 00:24:46.696 Latency(us) 00:24:46.696 Device Information : IOPS MiB/s Average min max 00:24:46.696 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 7008.00 27.38 142.36 133.93 327.42 00:24:46.696 ======================================================== 00:24:46.696 Total : 7008.00 27.38 142.36 133.93 327.42 00:24:46.696 00:24:46.696 Initializing NVMe Controllers 00:24:46.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:46.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:46.696 Initialization complete. Launching workers. 00:24:46.696 ======================================================== 00:24:46.696 Latency(us) 00:24:46.696 Device Information : IOPS MiB/s Average min max 00:24:46.696 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41167.65 40566.06 41932.09 00:24:46.696 ======================================================== 00:24:46.696 Total : 25.00 0.10 41167.65 40566.06 41932.09 00:24:46.696 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 583978 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 583980 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.696 rmmod nvme_tcp 00:24:46.696 rmmod nvme_fabrics 00:24:46.696 rmmod nvme_keyring 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 583934 ']' 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 583934 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 583934 ']' 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 583934 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 583934 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 583934' 00:24:46.696 killing process with pid 583934 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 583934 00:24:46.696 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 583934 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.955 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.858 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.117 00:24:49.117 real 0m10.095s 00:24:49.117 user 0m6.583s 00:24:49.117 sys 0m5.409s 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:49.117 ************************************ 00:24:49.117 END TEST nvmf_control_msg_list 00:24:49.117 ************************************ 
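The control_msg_list run that just ended is easier to read as the handful of RPCs it issued. A condensed replay follows, assuming the suite's rpc_cmd wrapper is a thin front end for scripts/rpc.py against the default /var/tmp/spdk.sock; the option strings themselves are copied verbatim from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with only ONE control message buffer, the resource the
    # test deliberately starves (options copied from the trace above).
    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    # Subsystem backed by a 32 MiB malloc bdev with 512-byte blocks,
    # listening on the namespaced target address.
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $RPC bdev_malloc_create -b Malloc0 32 512
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Three spdk_nvme_perf instances (cores 0x2, 0x4, 0x8, each with -q 1 -o 4096 -w randread -t 1) then connected concurrently. One reading of the latency tables above: a single initiator held the lone control-message buffer and sustained ~7000 IOPS at ~142 us average, while the other two spent most of the 1-second run queued (~25 IOPS at ~41 ms). Teardown then reversed nvmftestinit: kill the target, modprobe -r the nvme-tcp/nvme-fabrics/nvme-keyring modules, and iptables-save | grep -v SPDK_NVMF | iptables-restore to drop only the tagged firewall rule.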
00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:49.117 ************************************ 00:24:49.117 START TEST nvmf_wait_for_buf 00:24:49.117 ************************************ 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:49.117 * Looking for test storage... 00:24:49.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:49.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.117 --rc genhtml_branch_coverage=1 00:24:49.117 --rc genhtml_function_coverage=1 00:24:49.117 --rc genhtml_legend=1 00:24:49.117 --rc geninfo_all_blocks=1 00:24:49.117 --rc geninfo_unexecuted_blocks=1 00:24:49.117 00:24:49.117 ' 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:49.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.117 --rc genhtml_branch_coverage=1 00:24:49.117 --rc genhtml_function_coverage=1 00:24:49.117 --rc genhtml_legend=1 00:24:49.117 --rc geninfo_all_blocks=1 00:24:49.117 --rc geninfo_unexecuted_blocks=1 00:24:49.117 00:24:49.117 ' 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:49.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.117 --rc genhtml_branch_coverage=1 00:24:49.117 --rc genhtml_function_coverage=1 00:24:49.117 --rc genhtml_legend=1 00:24:49.117 --rc geninfo_all_blocks=1 00:24:49.117 --rc geninfo_unexecuted_blocks=1 00:24:49.117 00:24:49.117 ' 00:24:49.117 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:49.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.117 --rc genhtml_branch_coverage=1 00:24:49.117 --rc genhtml_function_coverage=1 00:24:49.117 --rc genhtml_legend=1 00:24:49.118 --rc geninfo_all_blocks=1 00:24:49.118 --rc geninfo_unexecuted_blocks=1 00:24:49.118 00:24:49.118 ' 00:24:49.118 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.118 06:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:49.376 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.377 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.951 
06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.951 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:55.951 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:55.952 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:55.952 Found net devices under 0000:86:00.0: cvl_0_0 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:55.952 Found net devices under 0000:86:00.1: cvl_0_1 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.952 06:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.952 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:55.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:24:55.953 00:24:55.953 --- 10.0.0.2 ping statistics --- 00:24:55.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.953 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:24:55.953 00:24:55.953 --- 10.0.0.1 ping statistics --- 00:24:55.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.953 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=587715 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 587715 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 587715 ']' 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:55.953 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.953 [2024-11-20 06:35:26.976677] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:24:55.953 [2024-11-20 06:35:26.976719] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.953 [2024-11-20 06:35:27.055841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.953 [2024-11-20 06:35:27.096150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.953 [2024-11-20 06:35:27.096183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.953 [2024-11-20 06:35:27.096190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.953 [2024-11-20 06:35:27.096198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.953 [2024-11-20 06:35:27.096207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.953 [2024-11-20 06:35:27.096765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.953 06:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.953 Malloc0 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.953 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.953 [2024-11-20 06:35:27.261912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.954 [2024-11-20 06:35:27.290094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.954 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.954 [2024-11-20 06:35:27.373287] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:56.891 Initializing NVMe Controllers 00:24:56.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:56.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:56.891 Initialization complete. Launching workers. 00:24:56.891 ======================================================== 00:24:56.891 Latency(us) 00:24:56.891 Device Information : IOPS MiB/s Average min max 00:24:56.891 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33538.99 7273.10 71837.07 00:24:56.891 ======================================================== 00:24:56.891 Total : 124.00 15.50 33538.99 7273.10 71837.07 00:24:56.891 00:24:57.150 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.151 rmmod nvme_tcp 00:24:57.151 rmmod nvme_fabrics 00:24:57.151 rmmod nvme_keyring 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 587715 ']' 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 587715 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 587715 ']' 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 587715 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 587715 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 587715' 00:24:57.151 killing process with pid 587715 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 587715 00:24:57.151 06:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 587715 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.410 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.333 06:35:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.333 00:24:59.333 real 0m10.352s 00:24:59.333 user 0m3.911s 00:24:59.333 sys 0m4.865s 00:24:59.333 06:35:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:59.333 06:35:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:59.333 ************************************ 00:24:59.333 END TEST nvmf_wait_for_buf 00:24:59.333 ************************************ 00:24:59.333 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:24:59.333 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:59.333 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:24:59.333 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:24:59.333 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.333 06:35:31 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:05.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.901 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:05.902 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:05.902 Found net devices under 0000:86:00.0: cvl_0_0 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:05.902 Found net devices under 0000:86:00.1: cvl_0_1 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.902 ************************************ 00:25:05.902 START TEST nvmf_perf_adq 00:25:05.902 ************************************ 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:05.902 * Looking for test storage... 00:25:05.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.902 06:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:05.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.902 --rc genhtml_branch_coverage=1 00:25:05.902 --rc genhtml_function_coverage=1 00:25:05.902 --rc genhtml_legend=1 00:25:05.902 --rc geninfo_all_blocks=1 00:25:05.902 --rc geninfo_unexecuted_blocks=1 00:25:05.902 00:25:05.902 ' 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:05.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.902 --rc genhtml_branch_coverage=1 00:25:05.902 --rc genhtml_function_coverage=1 00:25:05.902 --rc genhtml_legend=1 00:25:05.902 --rc geninfo_all_blocks=1 00:25:05.902 --rc geninfo_unexecuted_blocks=1 00:25:05.902 00:25:05.902 ' 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:05.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.902 --rc genhtml_branch_coverage=1 00:25:05.902 --rc genhtml_function_coverage=1 00:25:05.902 --rc genhtml_legend=1 00:25:05.902 --rc geninfo_all_blocks=1 00:25:05.902 --rc geninfo_unexecuted_blocks=1 00:25:05.902 00:25:05.902 ' 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:05.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.902 --rc genhtml_branch_coverage=1 00:25:05.902 --rc genhtml_function_coverage=1 00:25:05.902 --rc genhtml_legend=1 00:25:05.902 --rc geninfo_all_blocks=1 00:25:05.902 --rc geninfo_unexecuted_blocks=1 00:25:05.902 00:25:05.902 ' 00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
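The lt 1.15 2 walk just traced is SPDK's lcov version gate: scripts/common.sh splits both version strings on ".", "-" and ":", reduces each field to an integer, and compares field by field to decide which LCOV_OPTS block to export. A minimal standalone sketch of that idiom, assuming the helper names shown in the trace (lt, cmp_versions, decimal); the bodies below paraphrase what the xtrace shows and are not the verbatim SPDK source:

    #!/usr/bin/env bash
    # Field-wise version compare, paraphrasing scripts/common.sh as traced above.

    # decimal: reduce one version field to an integer ("15" -> 15, non-numeric -> 0).
    decimal() {
        if [[ $1 =~ ^[0-9]+$ ]]; then echo "$1"; else echo 0; fi
    }

    # cmp_versions VER1 '<' VER2: returns 0 (true) when VER1 < VER2.
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            local f1 f2
            f1=$(decimal "${ver1[v]:-0}")
            f2=$(decimal "${ver2[v]:-0}")
            ((f1 < f2)) && return 0
            ((f1 > f2)) && return 1
        done
        return 1 # equal versions are not "less than"
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov is pre-2.0"   # the branch taken in the trace above

With ver1=(1 15) and ver2=(2), the first field comparison 1 < 2 returns true, which is exactly why the trace ends with scripts/common.sh@368 returning 0 and the pre-2.0 LCOV_OPTS being exported.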
00:25:05.902 06:35:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:05.903 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.903 06:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:11.177 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.177 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.177 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.177 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.177 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.177 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.177 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:11.178 06:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:11.178 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:11.178 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:11.178 Found net devices under 0000:86:00.0: cvl_0_0 00:25:11.178 06:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:11.178 Found net devices under 0000:86:00.1: cvl_0_1 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:11.178 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:12.554 06:35:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:14.462 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:19.792 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:19.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:19.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:19.793 Found net devices under 0000:86:00.0: cvl_0_0 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:19.793 Found net devices under 0000:86:00.1: cvl_0_1 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:25:19.793 00:25:19.793 --- 10.0.0.2 ping statistics --- 00:25:19.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.793 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:25:19.793 00:25:19.793 --- 10.0.0.1 ping statistics --- 00:25:19.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.793 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.793 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=596059 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 596059 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 596059 ']' 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:19.794 06:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.794 [2024-11-20 06:35:51.481470] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
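
At this point nvmftestinit has turned the two E810 ports into a back-to-back target/initiator pair: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP port 4420 on the initiator side, and cross-namespace pings in both directions confirm the link before nvmf_tgt is launched inside the namespace with --wait-for-rpc. A condensed sketch of the same bring-up, assuming those interface names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
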
00:25:19.794 [2024-11-20 06:35:51.481513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.794 [2024-11-20 06:35:51.558528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.794 [2024-11-20 06:35:51.601535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.794 [2024-11-20 06:35:51.601572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.794 [2024-11-20 06:35:51.601579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.794 [2024-11-20 06:35:51.601585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.794 [2024-11-20 06:35:51.601590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.794 [2024-11-20 06:35:51.603238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.794 [2024-11-20 06:35:51.603292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.794 [2024-11-20 06:35:51.603402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.794 [2024-11-20 06:35:51.603402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.729 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:20.729 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:25:20.729 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.729 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:20.729 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.729 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.729 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:25:20.729 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.730 
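
With the app parked at --wait-for-rpc, adq_configure_nvmf_target 0 programs the socket layer before the framework starts: placement-id 0 (no connection placement) plus server-side zero-copy sends on the posix implementation, then framework_start_init, a TCP transport with --sock-priority 0, and a Malloc-backed subsystem listening on 10.0.0.2:4420 (the transport and subsystem RPCs follow in the trace below). Issued by hand through scripts/rpc.py, the equivalent sequence would look roughly like this, assuming the default /var/tmp/spdk.sock RPC socket:

    rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
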
06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 [2024-11-20 06:35:52.490868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 Malloc1 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 [2024-11-20 06:35:52.553447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=596308 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:25:20.730 06:35:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:25:23.261 "tick_rate": 2100000000, 00:25:23.261 "poll_groups": [ 00:25:23.261 { 00:25:23.261 "name": "nvmf_tgt_poll_group_000", 00:25:23.261 "admin_qpairs": 1, 00:25:23.261 "io_qpairs": 1, 00:25:23.261 "current_admin_qpairs": 1, 00:25:23.261 "current_io_qpairs": 1, 00:25:23.261 "pending_bdev_io": 0, 00:25:23.261 "completed_nvme_io": 19738, 00:25:23.261 "transports": [ 00:25:23.261 { 00:25:23.261 "trtype": "TCP" 00:25:23.261 } 00:25:23.261 ] 00:25:23.261 }, 00:25:23.261 { 00:25:23.261 "name": "nvmf_tgt_poll_group_001", 00:25:23.261 "admin_qpairs": 0, 00:25:23.261 "io_qpairs": 1, 00:25:23.261 "current_admin_qpairs": 0, 00:25:23.261 "current_io_qpairs": 1, 00:25:23.261 "pending_bdev_io": 0, 00:25:23.261 "completed_nvme_io": 19725, 00:25:23.261 "transports": [ 00:25:23.261 { 00:25:23.261 "trtype": "TCP" 00:25:23.261 } 00:25:23.261 ] 00:25:23.261 }, 00:25:23.261 { 00:25:23.261 "name": "nvmf_tgt_poll_group_002", 00:25:23.261 "admin_qpairs": 0, 00:25:23.261 "io_qpairs": 1, 00:25:23.261 "current_admin_qpairs": 0, 00:25:23.261 "current_io_qpairs": 1, 00:25:23.261 "pending_bdev_io": 0, 00:25:23.261 "completed_nvme_io": 19681, 00:25:23.261 "transports": [ 00:25:23.261 { 00:25:23.261 "trtype": "TCP" 00:25:23.261 } 00:25:23.261 ] 00:25:23.261 }, 00:25:23.261 { 00:25:23.261 "name": "nvmf_tgt_poll_group_003", 00:25:23.261 "admin_qpairs": 0, 00:25:23.261 "io_qpairs": 1, 00:25:23.261 "current_admin_qpairs": 0, 00:25:23.261 "current_io_qpairs": 1, 00:25:23.261 "pending_bdev_io": 0, 00:25:23.261 "completed_nvme_io": 19631, 00:25:23.261 "transports": [ 00:25:23.261 { 00:25:23.261 "trtype": "TCP" 00:25:23.261 } 00:25:23.261 ] 00:25:23.261 } 00:25:23.261 ] 00:25:23.261 }' 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:25:23.261 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 596308 00:25:31.373 Initializing NVMe Controllers 00:25:31.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:31.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:31.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:31.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:31.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:25:31.373 Initialization complete. Launching workers. 00:25:31.373 ======================================================== 00:25:31.373 Latency(us) 00:25:31.373 Device Information : IOPS MiB/s Average min max 00:25:31.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10413.38 40.68 6147.31 1959.96 10380.77 00:25:31.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10534.98 41.15 6075.33 1883.03 12817.05 00:25:31.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10448.38 40.81 6124.79 1672.05 12724.30 00:25:31.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10471.98 40.91 6111.86 1812.58 10636.66 00:25:31.373 ======================================================== 00:25:31.373 Total : 41868.73 163.55 6114.71 1672.05 12817.05 00:25:31.373 00:25:31.373 [2024-11-20 06:36:02.710811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92b4f0 is same with the state(6) to be set 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:31.373 rmmod nvme_tcp 00:25:31.373 rmmod nvme_fabrics 00:25:31.373 rmmod nvme_keyring 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 596059 ']' 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 596059 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 596059 ']' 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 596059 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 596059 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 596059' 00:25:31.373 killing process with pid 596059 00:25:31.373 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 596059 00:25:31.373 06:36:02 
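
The baseline run above pushed 64-deep 4 KiB random reads from four initiator cores (spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0) and landed at roughly 41.9K IOPS / 163.55 MiB/s with ~6.1 ms average latency, spread almost perfectly evenly: nvmf_get_stats reported exactly one live I/O qpair on each of the four target poll groups, which perf_adq.sh asserts by counting matching poll groups with jq. A sketch of that check, assuming the stats JSON shown earlier:

    # One connection per poll group is the expected distribution without ADQ.
    count=$(rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    [[ $count -ne 4 ]] && { echo "qpairs not evenly distributed"; exit 1; }
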
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 596059 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.373 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.280 06:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:33.280 06:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:33.280 06:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:33.280 06:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:34.659 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:37.195 06:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:42.475 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:42.475 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.475 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:42.476 Found net devices under 0000:86:00.0: cvl_0_0 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:42.476 Found net devices under 0000:86:00.1: cvl_0_1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.476 06:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:42.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:25:42.476 00:25:42.476 --- 10.0.0.2 ping statistics --- 00:25:42.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.476 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:25:42.476 00:25:42.476 --- 10.0.0.1 ping statistics --- 00:25:42.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.476 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w 
net.core.busy_poll=1 00:25:42.476 net.core.busy_poll = 1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:42.476 net.core.busy_read = 1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:42.476 06:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=600606 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 600606 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 600606 ']' 00:25:42.476 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.477 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.477 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.477 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.477 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.477 [2024-11-20 06:36:14.084295] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
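
adq_configure_driver, traced above, is the heart of the ADQ pass: hardware TC offload goes on and the channel-pkt-inspect-optimize private flag off on the target port, busy polling is enabled globally (net.core.busy_poll and net.core.busy_read), an mqprio root qdisc splits the port into two traffic classes of two queues each, a hardware-only (skip_sw) flower filter steers TCP traffic destined for 10.0.0.2:4420 into traffic class 1, and set_xps_rxqs aligns transmit queues with their receive queues. Condensed (the ethtool/tc commands run inside the namespace via ip netns exec cvl_0_0_ns_spdk; the sysctls are global):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    scripts/perf/nvmf/set_xps_rxqs cvl_0_0
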
00:25:42.477 [2024-11-20 06:36:14.084338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.477 [2024-11-20 06:36:14.163734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.477 [2024-11-20 06:36:14.205350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.477 [2024-11-20 06:36:14.205386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.477 [2024-11-20 06:36:14.205394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.477 [2024-11-20 06:36:14.205400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.477 [2024-11-20 06:36:14.205405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.477 [2024-11-20 06:36:14.206967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.477 [2024-11-20 06:36:14.207088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.477 [2024-11-20 06:36:14.207198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.477 [2024-11-20 06:36:14.207199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.089 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:43.089 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:25:43.089 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:43.089 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:43.089 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 06:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 
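
Functionally the second target is configured like the first; only two knobs change. sock_impl_set_options, just issued above, now passes --enable-placement-id 1, which selects NAPI-ID-based connection placement in the posix sock module, and the TCP transport created below gets --sock-priority 1, so accepted connections are grouped onto poll groups by the hardware queue they arrive on instead of being spread round-robin. The changed RPCs, sketched in the same rpc.py form as before:

    rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

The expected effect shows up in the final nvmf_get_stats output: all four I/O qpairs concentrate on two poll groups, two each, matching the two-queue traffic class, while the other two groups stay idle; accordingly, the script's jq check at the end counts poll groups with current_io_qpairs == 0 and requires at least two.
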
06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 [2024-11-20 06:36:15.090471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:43.364 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.365 Malloc1 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.365 [2024-11-20 06:36:15.157047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=600857 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:25:43.365 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:25:45.927 "tick_rate": 2100000000, 00:25:45.927 "poll_groups": [ 00:25:45.927 { 00:25:45.927 "name": "nvmf_tgt_poll_group_000", 00:25:45.927 "admin_qpairs": 1, 00:25:45.927 "io_qpairs": 2, 00:25:45.927 "current_admin_qpairs": 1, 00:25:45.927 "current_io_qpairs": 2, 00:25:45.927 "pending_bdev_io": 0, 00:25:45.927 "completed_nvme_io": 29541, 00:25:45.927 "transports": [ 00:25:45.927 { 00:25:45.927 "trtype": "TCP" 00:25:45.927 } 00:25:45.927 ] 00:25:45.927 }, 00:25:45.927 { 00:25:45.927 "name": "nvmf_tgt_poll_group_001", 00:25:45.927 "admin_qpairs": 0, 00:25:45.927 "io_qpairs": 2, 00:25:45.927 "current_admin_qpairs": 0, 00:25:45.927 "current_io_qpairs": 2, 00:25:45.927 "pending_bdev_io": 0, 00:25:45.927 "completed_nvme_io": 30045, 00:25:45.927 "transports": [ 00:25:45.927 { 00:25:45.927 "trtype": "TCP" 00:25:45.927 } 00:25:45.927 ] 00:25:45.927 }, 00:25:45.927 { 00:25:45.927 "name": "nvmf_tgt_poll_group_002", 00:25:45.927 "admin_qpairs": 0, 00:25:45.927 "io_qpairs": 0, 00:25:45.927 "current_admin_qpairs": 0, 00:25:45.927 "current_io_qpairs": 0, 00:25:45.927 "pending_bdev_io": 0, 00:25:45.927 "completed_nvme_io": 0, 00:25:45.927 "transports": [ 00:25:45.927 { 00:25:45.927 "trtype": "TCP" 00:25:45.927 } 00:25:45.927 ] 00:25:45.927 }, 00:25:45.927 { 00:25:45.927 "name": "nvmf_tgt_poll_group_003", 00:25:45.927 "admin_qpairs": 0, 00:25:45.927 "io_qpairs": 0, 00:25:45.927 "current_admin_qpairs": 0, 00:25:45.927 "current_io_qpairs": 0, 00:25:45.927 "pending_bdev_io": 0, 00:25:45.927 "completed_nvme_io": 0, 00:25:45.927 "transports": [ 00:25:45.927 { 00:25:45.927 "trtype": "TCP" 00:25:45.927 } 00:25:45.927 ] 00:25:45.927 } 00:25:45.927 ] 00:25:45.927 }' 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:25:45.927 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 600857 00:25:54.035 Initializing NVMe Controllers 00:25:54.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:54.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:54.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:54.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:25:54.035 Initialization complete. Launching workers. 00:25:54.035 ======================================================== 00:25:54.035 Latency(us) 00:25:54.035 Device Information : IOPS MiB/s Average min max 00:25:54.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6915.30 27.01 9278.20 1600.41 52859.68 00:25:54.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7590.20 29.65 8434.07 1467.06 52291.42 00:25:54.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8649.80 33.79 7397.40 1504.74 53268.38 00:25:54.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8024.10 31.34 7978.25 1358.03 52889.39 00:25:54.035 ======================================================== 00:25:54.035 Total : 31179.39 121.79 8216.39 1358.03 53268.38 00:25:54.035 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.035 rmmod nvme_tcp 00:25:54.035 rmmod nvme_fabrics 00:25:54.035 rmmod nvme_keyring 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 600606 ']' 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 600606 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 600606 ']' 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 600606 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 600606 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 600606' 00:25:54.035 killing process with pid 600606 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 600606 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 600606 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.035 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:25:57.325 00:25:57.325 real 0m51.872s 00:25:57.325 user 2m49.432s 00:25:57.325 sys 0m10.298s 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:57.325 ************************************ 00:25:57.325 END TEST nvmf_perf_adq 00:25:57.325 ************************************ 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:57.325 ************************************ 00:25:57.325 START TEST nvmf_shutdown 00:25:57.325 ************************************ 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:57.325 * Looking for test storage... 
00:25:57.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.325 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:57.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.325 --rc genhtml_branch_coverage=1 00:25:57.325 --rc genhtml_function_coverage=1 00:25:57.325 --rc genhtml_legend=1 00:25:57.325 --rc geninfo_all_blocks=1 00:25:57.325 --rc geninfo_unexecuted_blocks=1 00:25:57.325 00:25:57.325 ' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:57.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.326 --rc genhtml_branch_coverage=1 00:25:57.326 --rc genhtml_function_coverage=1 00:25:57.326 --rc genhtml_legend=1 00:25:57.326 --rc geninfo_all_blocks=1 00:25:57.326 --rc geninfo_unexecuted_blocks=1 00:25:57.326 00:25:57.326 ' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:57.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.326 --rc genhtml_branch_coverage=1 00:25:57.326 --rc genhtml_function_coverage=1 00:25:57.326 --rc genhtml_legend=1 00:25:57.326 --rc geninfo_all_blocks=1 00:25:57.326 --rc geninfo_unexecuted_blocks=1 00:25:57.326 00:25:57.326 ' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:57.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.326 --rc genhtml_branch_coverage=1 00:25:57.326 --rc genhtml_function_coverage=1 00:25:57.326 --rc genhtml_legend=1 00:25:57.326 --rc geninfo_all_blocks=1 00:25:57.326 --rc geninfo_unexecuted_blocks=1 00:25:57.326 00:25:57.326 ' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
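The scripts/common.sh churn above is just a bash version comparison run while the coverage helpers are sourced: `lt 1.15 2` splits both version strings on '.', '-' and ':' (IFS=.-:) into arrays and walks them component by component, and since 1 < 2 in the first field the installed lcov is treated as pre-2.0, so the older --rc lcov_branch_coverage=1 option spelling is exported. A minimal sketch of the same logic, simplified to purely numeric fields (the real cmp_versions also validates each field through its decimal helper):

    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # First differing component decides; missing components count as 0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2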
00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:57.326 06:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:57.326 06:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:57.326 ************************************ 00:25:57.326 START TEST nvmf_shutdown_tc1 00:25:57.326 ************************************ 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.326 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:03.917 06:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:03.917 06:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:03.917 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:03.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:03.917 Found net devices under 0000:86:00.0: cvl_0_0 00:26:03.917 06:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:03.917 Found net devices under 0000:86:00.1: cvl_0_1 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:03.917 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:03.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:26:03.918 00:26:03.918 --- 10.0.0.2 ping statistics --- 00:26:03.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.918 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:26:03.918 00:26:03.918 --- 10.0.0.1 ping statistics --- 00:26:03.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.918 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=606312 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 606312 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 606312 ']' 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
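nvmftestinit above discovers the two E810 ports (0000:86:00.0/.1, device ID 0x159b, driver ice, net devices cvl_0_0 and cvl_0_1) and builds the single-box test topology: the target port moves into its own network namespace, so traffic between the initiator at 10.0.0.1 and the target at 10.0.0.2 actually crosses the NIC rather than loopback. Condensed from the records above (the iptables comment tag is what the later `iptables-save | grep -v SPDK_NVMF | iptables-restore` cleanup keys on to strip exactly this rule):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the comment makes the rule identifiable at cleanup.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator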
00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:03.918 06:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:03.918 [2024-11-20 06:36:35.037761] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:03.918 [2024-11-20 06:36:35.037805] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.918 [2024-11-20 06:36:35.114614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.918 [2024-11-20 06:36:35.156184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.918 [2024-11-20 06:36:35.156226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.918 [2024-11-20 06:36:35.156233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.918 [2024-11-20 06:36:35.156239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.918 [2024-11-20 06:36:35.156247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.918 [2024-11-20 06:36:35.157806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.918 [2024-11-20 06:36:35.157914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.918 [2024-11-20 06:36:35.158032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.918 [2024-11-20 06:36:35.158032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:03.918 [2024-11-20 06:36:35.301391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:03.918 06:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.918 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:03.918 Malloc1 
00:26:03.918 [2024-11-20 06:36:35.407642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.918 Malloc2 00:26:03.918 Malloc3 00:26:03.918 Malloc4 00:26:03.918 Malloc5 00:26:03.918 Malloc6 00:26:03.918 Malloc7 00:26:03.918 Malloc8 00:26:03.918 Malloc9 00:26:04.178 Malloc10 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=606439 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 606439 /var/tmp/bdevperf.sock 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 606439 ']' 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:04.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
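Two patterns in the records above are worth calling out. First, the ten subsystems are not created with ten RPC round trips: the loop at target/shutdown.sh@28-29 cats each subsystem's commands into rpcs.txt and a single rpc_cmd call (shutdown.sh@36) replays the whole file, which is why Malloc1 through Malloc10 echo back in one burst. A sketch of the pattern; the real per-subsystem RPC bodies live in shutdown.sh's here-doc and are not visible in this trace, so the three commands below are representative only:

    rm -rf rpcs.txt
    for i in {1..10}; do
        {
            # Representative RPCs; serial-number scheme here is hypothetical.
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        } >> rpcs.txt
    done
    rpc_cmd < rpcs.txt   # harness helper: one rpc.py session runs every line

Second, the bdev_svc app just launched (listening on /var/tmp/bdevperf.sock) takes its entire configuration as --json /dev/fd/63, i.e. bash process substitution around gen_nvmf_target_json 1 2 ... 10, so it attaches one NVMe-oF controller per subsystem at startup; the here-doc template that JSON is built from is traced next.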
00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.178 { 00:26:04.178 "params": { 00:26:04.178 "name": "Nvme$subsystem", 00:26:04.178 "trtype": "$TEST_TRANSPORT", 00:26:04.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.178 "adrfam": "ipv4", 00:26:04.178 "trsvcid": "$NVMF_PORT", 00:26:04.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.178 "hdgst": ${hdgst:-false}, 00:26:04.178 "ddgst": ${ddgst:-false} 00:26:04.178 }, 00:26:04.178 "method": "bdev_nvme_attach_controller" 00:26:04.178 } 00:26:04.178 EOF 00:26:04.178 )") 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.178 { 00:26:04.178 "params": { 00:26:04.178 "name": "Nvme$subsystem", 00:26:04.178 "trtype": "$TEST_TRANSPORT", 00:26:04.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.178 "adrfam": "ipv4", 00:26:04.178 "trsvcid": "$NVMF_PORT", 00:26:04.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.178 "hdgst": ${hdgst:-false}, 00:26:04.178 "ddgst": ${ddgst:-false} 00:26:04.178 }, 00:26:04.178 "method": "bdev_nvme_attach_controller" 00:26:04.178 } 00:26:04.178 EOF 00:26:04.178 )") 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.178 { 00:26:04.178 "params": { 00:26:04.178 "name": "Nvme$subsystem", 00:26:04.178 "trtype": "$TEST_TRANSPORT", 00:26:04.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.178 "adrfam": "ipv4", 00:26:04.178 "trsvcid": "$NVMF_PORT", 00:26:04.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.178 "hdgst": ${hdgst:-false}, 00:26:04.178 "ddgst": ${ddgst:-false} 00:26:04.178 }, 00:26:04.178 "method": "bdev_nvme_attach_controller" 00:26:04.178 } 00:26:04.178 EOF 00:26:04.178 )") 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:26:04.178 { 00:26:04.178 "params": { 00:26:04.178 "name": "Nvme$subsystem", 00:26:04.178 "trtype": "$TEST_TRANSPORT", 00:26:04.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.178 "adrfam": "ipv4", 00:26:04.178 "trsvcid": "$NVMF_PORT", 00:26:04.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.178 "hdgst": ${hdgst:-false}, 00:26:04.178 "ddgst": ${ddgst:-false} 00:26:04.178 }, 00:26:04.178 "method": "bdev_nvme_attach_controller" 00:26:04.178 } 00:26:04.178 EOF 00:26:04.178 )") 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.178 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.178 { 00:26:04.178 "params": { 00:26:04.178 "name": "Nvme$subsystem", 00:26:04.178 "trtype": "$TEST_TRANSPORT", 00:26:04.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "$NVMF_PORT", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.179 "hdgst": ${hdgst:-false}, 00:26:04.179 "ddgst": ${ddgst:-false} 00:26:04.179 }, 00:26:04.179 "method": "bdev_nvme_attach_controller" 00:26:04.179 } 00:26:04.179 EOF 00:26:04.179 )") 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.179 { 00:26:04.179 "params": { 00:26:04.179 "name": "Nvme$subsystem", 00:26:04.179 "trtype": "$TEST_TRANSPORT", 00:26:04.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "$NVMF_PORT", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.179 "hdgst": ${hdgst:-false}, 00:26:04.179 "ddgst": ${ddgst:-false} 00:26:04.179 }, 00:26:04.179 "method": "bdev_nvme_attach_controller" 00:26:04.179 } 00:26:04.179 EOF 00:26:04.179 )") 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.179 { 00:26:04.179 "params": { 00:26:04.179 "name": "Nvme$subsystem", 00:26:04.179 "trtype": "$TEST_TRANSPORT", 00:26:04.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "$NVMF_PORT", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.179 "hdgst": ${hdgst:-false}, 00:26:04.179 "ddgst": ${ddgst:-false} 00:26:04.179 }, 00:26:04.179 "method": "bdev_nvme_attach_controller" 00:26:04.179 } 00:26:04.179 EOF 00:26:04.179 )") 00:26:04.179 [2024-11-20 06:36:35.883221] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:26:04.179 [2024-11-20 06:36:35.883269] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.179 { 00:26:04.179 "params": { 00:26:04.179 "name": "Nvme$subsystem", 00:26:04.179 "trtype": "$TEST_TRANSPORT", 00:26:04.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "$NVMF_PORT", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.179 "hdgst": ${hdgst:-false}, 00:26:04.179 "ddgst": ${ddgst:-false} 00:26:04.179 }, 00:26:04.179 "method": "bdev_nvme_attach_controller" 00:26:04.179 } 00:26:04.179 EOF 00:26:04.179 )") 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.179 { 00:26:04.179 "params": { 00:26:04.179 "name": "Nvme$subsystem", 00:26:04.179 "trtype": "$TEST_TRANSPORT", 00:26:04.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "$NVMF_PORT", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.179 "hdgst": ${hdgst:-false}, 00:26:04.179 "ddgst": ${ddgst:-false} 00:26:04.179 }, 00:26:04.179 "method": "bdev_nvme_attach_controller" 00:26:04.179 } 00:26:04.179 EOF 00:26:04.179 )") 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:04.179 { 00:26:04.179 "params": { 00:26:04.179 "name": "Nvme$subsystem", 00:26:04.179 "trtype": "$TEST_TRANSPORT", 00:26:04.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "$NVMF_PORT", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.179 "hdgst": ${hdgst:-false}, 00:26:04.179 "ddgst": ${ddgst:-false} 00:26:04.179 }, 00:26:04.179 "method": "bdev_nvme_attach_controller" 00:26:04.179 } 00:26:04.179 EOF 00:26:04.179 )") 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
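The xtrace above is the JSON config generator in nvmf/common.sh at work: `for subsystem in "${@:-1}"` emits one bdev_nvme_attach_controller block per requested subsystem through a here-document, collects the blocks in the `config` array, and the closing `jq .` validates the merged document that is printed in the lines that follow. A minimal standalone sketch of the same pattern (a sketch only: `gen_conf` is a hypothetical stand-in for gen_nvmf_target_json, the variable defaults are illustrative, and the real helper wraps the entries in a larger config document):

#!/usr/bin/env bash
# Sketch of the per-subsystem config generation traced above.
gen_conf() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One here-document per subsystem; variables expand here.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # IFS=, makes "${config[*]}" a comma-joined list; jq validates it.
    local IFS=,
    printf '[%s]' "${config[*]}" | jq .
}

gen_conf 1 2 3   # three attach-controller entries, cnode1..cnode3

Each EOF block in the trace is one loop iteration, so the ten subsystems requested here produce the ten attach-controller entries printed below.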
00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:04.179 06:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:04.179 "params": { 00:26:04.179 "name": "Nvme1", 00:26:04.179 "trtype": "tcp", 00:26:04.179 "traddr": "10.0.0.2", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "4420", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:04.179 "hdgst": false, 00:26:04.179 "ddgst": false 00:26:04.179 }, 00:26:04.179 "method": "bdev_nvme_attach_controller" 00:26:04.179 },{ 00:26:04.179 "params": { 00:26:04.179 "name": "Nvme2", 00:26:04.179 "trtype": "tcp", 00:26:04.179 "traddr": "10.0.0.2", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "4420", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:04.179 "hdgst": false, 00:26:04.179 "ddgst": false 00:26:04.179 }, 00:26:04.179 "method": "bdev_nvme_attach_controller" 00:26:04.179 },{ 00:26:04.179 "params": { 00:26:04.179 "name": "Nvme3", 00:26:04.179 "trtype": "tcp", 00:26:04.179 "traddr": "10.0.0.2", 00:26:04.179 "adrfam": "ipv4", 00:26:04.179 "trsvcid": "4420", 00:26:04.179 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:04.179 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:04.179 "hdgst": false, 00:26:04.179 "ddgst": false 00:26:04.179 }, 00:26:04.180 "method": "bdev_nvme_attach_controller" 00:26:04.180 },{ 00:26:04.180 "params": { 00:26:04.180 "name": "Nvme4", 00:26:04.180 "trtype": "tcp", 00:26:04.180 "traddr": "10.0.0.2", 00:26:04.180 "adrfam": "ipv4", 00:26:04.180 "trsvcid": "4420", 00:26:04.180 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:04.180 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:04.180 "hdgst": false, 00:26:04.180 "ddgst": false 00:26:04.180 }, 00:26:04.180 "method": "bdev_nvme_attach_controller" 00:26:04.180 },{ 00:26:04.180 "params": { 00:26:04.180 "name": "Nvme5", 00:26:04.180 "trtype": "tcp", 00:26:04.180 "traddr": "10.0.0.2", 00:26:04.180 "adrfam": "ipv4", 00:26:04.180 "trsvcid": "4420", 00:26:04.180 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:04.180 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:04.180 "hdgst": false, 00:26:04.180 "ddgst": false 00:26:04.180 }, 00:26:04.180 "method": "bdev_nvme_attach_controller" 00:26:04.180 },{ 00:26:04.180 "params": { 00:26:04.180 "name": "Nvme6", 00:26:04.180 "trtype": "tcp", 00:26:04.180 "traddr": "10.0.0.2", 00:26:04.180 "adrfam": "ipv4", 00:26:04.180 "trsvcid": "4420", 00:26:04.180 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:04.180 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:04.180 "hdgst": false, 00:26:04.180 "ddgst": false 00:26:04.180 }, 00:26:04.180 "method": "bdev_nvme_attach_controller" 00:26:04.180 },{ 00:26:04.180 "params": { 00:26:04.180 "name": "Nvme7", 00:26:04.180 "trtype": "tcp", 00:26:04.180 "traddr": "10.0.0.2", 00:26:04.180 "adrfam": "ipv4", 00:26:04.180 "trsvcid": "4420", 00:26:04.180 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:04.180 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:04.180 "hdgst": false, 00:26:04.180 "ddgst": false 00:26:04.180 }, 00:26:04.180 "method": "bdev_nvme_attach_controller" 00:26:04.180 },{ 00:26:04.180 "params": { 00:26:04.180 "name": "Nvme8", 00:26:04.180 "trtype": "tcp", 00:26:04.180 "traddr": "10.0.0.2", 00:26:04.180 "adrfam": "ipv4", 00:26:04.180 "trsvcid": "4420", 00:26:04.180 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:04.180 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:04.180 "hdgst": false, 00:26:04.180 "ddgst": false 00:26:04.180 }, 00:26:04.180 "method": "bdev_nvme_attach_controller" 00:26:04.180 },{ 00:26:04.180 "params": { 00:26:04.180 "name": "Nvme9", 00:26:04.180 "trtype": "tcp", 00:26:04.180 "traddr": "10.0.0.2", 00:26:04.180 "adrfam": "ipv4", 00:26:04.180 "trsvcid": "4420", 00:26:04.180 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:04.180 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:04.180 "hdgst": false, 00:26:04.180 "ddgst": false 00:26:04.180 }, 00:26:04.180 "method": "bdev_nvme_attach_controller" 00:26:04.180 },{ 00:26:04.180 "params": { 00:26:04.180 "name": "Nvme10", 00:26:04.180 "trtype": "tcp", 00:26:04.180 "traddr": "10.0.0.2", 00:26:04.180 "adrfam": "ipv4", 00:26:04.180 "trsvcid": "4420", 00:26:04.180 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:04.180 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:04.180 "hdgst": false, 00:26:04.180 "ddgst": false 00:26:04.180 }, 00:26:04.180 "method": "bdev_nvme_attach_controller" 00:26:04.180 }' 00:26:04.180 [2024-11-20 06:36:35.963368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.180 [2024-11-20 06:36:36.004447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 606439 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:06.081 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:07.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 606439 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 606312 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.015 { 00:26:07.015 "params": { 00:26:07.015 "name": "Nvme$subsystem", 00:26:07.015 "trtype": "$TEST_TRANSPORT", 00:26:07.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.015 "adrfam": "ipv4", 00:26:07.015 "trsvcid": "$NVMF_PORT", 00:26:07.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.015 "hdgst": ${hdgst:-false}, 00:26:07.015 "ddgst": ${ddgst:-false} 00:26:07.015 }, 00:26:07.015 "method": "bdev_nvme_attach_controller" 00:26:07.015 } 00:26:07.015 EOF 00:26:07.015 )") 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.015 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.015 { 00:26:07.015 "params": { 00:26:07.015 "name": "Nvme$subsystem", 00:26:07.015 "trtype": "$TEST_TRANSPORT", 00:26:07.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.015 "adrfam": "ipv4", 00:26:07.015 "trsvcid": "$NVMF_PORT", 00:26:07.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.015 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.016 { 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme$subsystem", 00:26:07.016 "trtype": "$TEST_TRANSPORT", 00:26:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "$NVMF_PORT", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.016 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.016 { 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme$subsystem", 00:26:07.016 "trtype": "$TEST_TRANSPORT", 00:26:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "$NVMF_PORT", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.016 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.016 { 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme$subsystem", 00:26:07.016 "trtype": "$TEST_TRANSPORT", 00:26:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "$NVMF_PORT", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.016 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.016 { 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme$subsystem", 00:26:07.016 "trtype": "$TEST_TRANSPORT", 00:26:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "$NVMF_PORT", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.016 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.016 { 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme$subsystem", 00:26:07.016 "trtype": "$TEST_TRANSPORT", 00:26:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "$NVMF_PORT", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.016 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 [2024-11-20 06:36:38.802665] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:26:07.016 [2024-11-20 06:36:38.802710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606985 ] 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.016 { 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme$subsystem", 00:26:07.016 "trtype": "$TEST_TRANSPORT", 00:26:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "$NVMF_PORT", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.016 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.016 { 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme$subsystem", 00:26:07.016 "trtype": "$TEST_TRANSPORT", 00:26:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "$NVMF_PORT", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.016 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.016 { 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme$subsystem", 00:26:07.016 "trtype": "$TEST_TRANSPORT", 00:26:07.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "$NVMF_PORT", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.016 "hdgst": ${hdgst:-false}, 00:26:07.016 "ddgst": ${ddgst:-false} 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 } 00:26:07.016 EOF 00:26:07.016 )") 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
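This is the actual tc1 sequence: the `Killed` message above shows that the first helper app (bdev_svc, pid 606439) was deliberately SIGKILLed after `framework_wait_init` returned, and bdevperf is then started against the same ten subsystems, with the generated JSON handed over through bash process substitution (the `--json /dev/fd/62` in the trace is the file descriptor that `<(...)` creates). A condensed sketch of that flow, with paths and options as they appear in the trace (gen_nvmf_target_json is sourced from test/nvmf/common.sh; the PID handling is simplified):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1) Attach all ten controllers in a throwaway app, wait for init.
"$SPDK/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) &
svcpid=$!
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock framework_wait_init

# 2) tc1 SIGKILLs it on purpose and later checks that the target
#    process (pid 606312 in this run) survived (the kill -0 above).
kill -9 "$svcpid"

# 3) ... then runs the verify workload for real under bdevperf.
"$SPDK/build/examples/bdevperf" -q 64 -o 65536 -w verify -t 1 \
    --json <(gen_nvmf_target_json {1..10})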
00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:07.016 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme1", 00:26:07.016 "trtype": "tcp", 00:26:07.016 "traddr": "10.0.0.2", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "4420", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:07.016 "hdgst": false, 00:26:07.016 "ddgst": false 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 },{ 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme2", 00:26:07.016 "trtype": "tcp", 00:26:07.016 "traddr": "10.0.0.2", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "4420", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:07.016 "hdgst": false, 00:26:07.016 "ddgst": false 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 },{ 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme3", 00:26:07.016 "trtype": "tcp", 00:26:07.016 "traddr": "10.0.0.2", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "4420", 00:26:07.016 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:07.016 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:07.016 "hdgst": false, 00:26:07.016 "ddgst": false 00:26:07.016 }, 00:26:07.016 "method": "bdev_nvme_attach_controller" 00:26:07.016 },{ 00:26:07.016 "params": { 00:26:07.016 "name": "Nvme4", 00:26:07.016 "trtype": "tcp", 00:26:07.016 "traddr": "10.0.0.2", 00:26:07.016 "adrfam": "ipv4", 00:26:07.016 "trsvcid": "4420", 00:26:07.017 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:07.017 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:07.017 "hdgst": false, 00:26:07.017 "ddgst": false 00:26:07.017 }, 00:26:07.017 "method": "bdev_nvme_attach_controller" 00:26:07.017 },{ 00:26:07.017 "params": { 00:26:07.017 "name": "Nvme5", 00:26:07.017 "trtype": "tcp", 00:26:07.017 "traddr": "10.0.0.2", 00:26:07.017 "adrfam": "ipv4", 00:26:07.017 "trsvcid": "4420", 00:26:07.017 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:07.017 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:07.017 "hdgst": false, 00:26:07.017 "ddgst": false 00:26:07.017 }, 00:26:07.017 "method": "bdev_nvme_attach_controller" 00:26:07.017 },{ 00:26:07.017 "params": { 00:26:07.017 "name": "Nvme6", 00:26:07.017 "trtype": "tcp", 00:26:07.017 "traddr": "10.0.0.2", 00:26:07.017 "adrfam": "ipv4", 00:26:07.017 "trsvcid": "4420", 00:26:07.017 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:07.017 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:07.017 "hdgst": false, 00:26:07.017 "ddgst": false 00:26:07.017 }, 00:26:07.017 "method": "bdev_nvme_attach_controller" 00:26:07.017 },{ 00:26:07.017 "params": { 00:26:07.017 "name": "Nvme7", 00:26:07.017 "trtype": "tcp", 00:26:07.017 "traddr": "10.0.0.2", 00:26:07.017 "adrfam": "ipv4", 00:26:07.017 "trsvcid": "4420", 00:26:07.017 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:07.017 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:07.017 "hdgst": false, 00:26:07.017 "ddgst": false 00:26:07.017 }, 00:26:07.017 "method": "bdev_nvme_attach_controller" 00:26:07.017 },{ 00:26:07.017 "params": { 00:26:07.017 "name": "Nvme8", 00:26:07.017 "trtype": "tcp", 00:26:07.017 "traddr": "10.0.0.2", 00:26:07.017 "adrfam": "ipv4", 00:26:07.017 "trsvcid": "4420", 00:26:07.017 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:07.017 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:07.017 "hdgst": false, 00:26:07.017 "ddgst": false 00:26:07.017 }, 00:26:07.017 "method": "bdev_nvme_attach_controller" 00:26:07.017 },{ 00:26:07.017 "params": { 00:26:07.017 "name": "Nvme9", 00:26:07.017 "trtype": "tcp", 00:26:07.017 "traddr": "10.0.0.2", 00:26:07.017 "adrfam": "ipv4", 00:26:07.017 "trsvcid": "4420", 00:26:07.017 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:07.017 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:07.017 "hdgst": false, 00:26:07.017 "ddgst": false 00:26:07.017 }, 00:26:07.017 "method": "bdev_nvme_attach_controller" 00:26:07.017 },{ 00:26:07.017 "params": { 00:26:07.017 "name": "Nvme10", 00:26:07.017 "trtype": "tcp", 00:26:07.017 "traddr": "10.0.0.2", 00:26:07.017 "adrfam": "ipv4", 00:26:07.017 "trsvcid": "4420", 00:26:07.017 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:07.017 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:07.017 "hdgst": false, 00:26:07.017 "ddgst": false 00:26:07.017 }, 00:26:07.017 "method": "bdev_nvme_attach_controller" 00:26:07.017 }' 00:26:07.275 [2024-11-20 06:36:38.876366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.275 [2024-11-20 06:36:38.917399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.661 Running I/O for 1 seconds... 00:26:09.853 2314.00 IOPS, 144.62 MiB/s 00:26:09.853 Latency(us) 00:26:09.853 [2024-11-20T05:36:41.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.853 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme1n1 : 1.15 282.28 17.64 0.00 0.00 223780.40 14542.75 208716.56 00:26:09.853 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme2n1 : 1.15 277.49 17.34 0.00 0.00 224133.41 15915.89 206719.27 00:26:09.853 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme3n1 : 1.13 282.51 17.66 0.00 0.00 218420.57 13544.11 212711.13 00:26:09.853 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme4n1 : 1.08 301.35 18.83 0.00 0.00 199735.18 13294.45 214708.42 00:26:09.853 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme5n1 : 1.16 276.16 17.26 0.00 0.00 217427.38 18100.42 219701.64 00:26:09.853 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme6n1 : 1.15 278.04 17.38 0.00 0.00 212843.76 15603.81 214708.42 00:26:09.853 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme7n1 : 1.15 279.28 17.45 0.00 0.00 208694.91 16727.28 211712.49 00:26:09.853 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme8n1 : 1.16 275.59 17.22 0.00 0.00 208634.83 11734.06 221698.93 00:26:09.853 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme9n1 : 1.17 273.90 17.12 0.00 0.00 207002.48 16602.45 224694.86 00:26:09.853 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:26:09.853 Verification LBA range: start 0x0 length 0x400 00:26:09.853 Nvme10n1 : 1.17 274.45 17.15 0.00 0.00 203541.16 17226.61 234681.30 00:26:09.853 [2024-11-20T05:36:41.689Z] =================================================================================================================== 00:26:09.853 [2024-11-20T05:36:41.689Z] Total : 2801.03 175.06 0.00 0.00 212411.85 11734.06 234681.30 00:26:09.853 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:09.853 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:09.853 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:09.853 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.112 rmmod nvme_tcp 00:26:10.112 rmmod nvme_fabrics 00:26:10.112 rmmod nvme_keyring 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 606312 ']' 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 606312 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 606312 ']' 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 606312 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 606312 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 606312' 00:26:10.112 killing process with pid 606312 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 606312 00:26:10.112 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 606312 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.372 06:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:12.908 00:26:12.908 real 0m15.237s 00:26:12.908 user 0m33.870s 00:26:12.908 sys 0m5.829s 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:12.908 ************************************ 00:26:12.908 END TEST nvmf_shutdown_tc1 00:26:12.908 ************************************ 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:12.908 ************************************ 00:26:12.908 START TEST nvmf_shutdown_tc2 00:26:12.908 ************************************ 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:26:12.908 06:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.908 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:12.909 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:12.909 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:12.909 Found net devices under 0000:86:00.0: cvl_0_0 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.909 06:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:12.909 Found net devices under 0000:86:00.1: cvl_0_1 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:12.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:26:12.909 00:26:12.909 --- 10.0.0.2 ping statistics --- 00:26:12.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.909 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:26:12.909 00:26:12.909 --- 10.0.0.1 ping statistics --- 00:26:12.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.909 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:12.909 06:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=608101 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 608101 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 608101 ']' 00:26:12.909 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.910 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:12.910 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.910 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:12.910 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:12.910 [2024-11-20 06:36:44.711558] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:12.910 [2024-11-20 06:36:44.711598] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.168 [2024-11-20 06:36:44.790185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.168 [2024-11-20 06:36:44.831705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.168 [2024-11-20 06:36:44.831742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.168 [2024-11-20 06:36:44.831749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.168 [2024-11-20 06:36:44.831755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.168 [2024-11-20 06:36:44.831760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
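All of this runs on the topology that nvmftestinit built in the trace above: the two ports of the e810 pair are split so that cvl_0_0 lives inside the cvl_0_0_ns_spdk network namespace (target side, 10.0.0.2) while cvl_0_1 stays in the root namespace (initiator side, 10.0.0.1), with a tagged iptables rule admitting NVMe/TCP on port 4420 and a ping in each direction as a sanity check. Condensed into plain commands (interface names, addresses, and the iptables comment are exactly as traced; run as root):

#!/usr/bin/env bash
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Tagged ACCEPT rule so cleanup can find and strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                              # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> initiator

# The target itself then runs inside the namespace, as in the
# nvmfappstart trace just above:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

Because the target listens only inside the namespace, every initiator-side connection in these tests has to cross the physical link between the two ports, which is what makes this a phy (rather than loopback) test.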
00:26:13.168 [2024-11-20 06:36:44.833385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.168 [2024-11-20 06:36:44.833493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.169 [2024-11-20 06:36:44.833600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.169 [2024-11-20 06:36:44.833602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:13.169 [2024-11-20 06:36:44.964809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.169 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.427 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:13.427 Malloc1 00:26:13.427 [2024-11-20 06:36:45.079730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.427 Malloc2 00:26:13.427 Malloc3 00:26:13.427 Malloc4 00:26:13.427 Malloc5 00:26:13.686 Malloc6 00:26:13.686 Malloc7 00:26:13.686 Malloc8 00:26:13.686 Malloc9 00:26:13.686 Malloc10 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=608164 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 608164 /var/tmp/bdevperf.sock 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 608164 ']' 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:13.686 06:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:13.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.686 { 00:26:13.686 "params": { 00:26:13.686 "name": "Nvme$subsystem", 00:26:13.686 "trtype": "$TEST_TRANSPORT", 00:26:13.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.686 "adrfam": "ipv4", 00:26:13.686 "trsvcid": "$NVMF_PORT", 00:26:13.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.686 "hdgst": ${hdgst:-false}, 00:26:13.686 "ddgst": ${ddgst:-false} 00:26:13.686 }, 00:26:13.686 "method": "bdev_nvme_attach_controller" 00:26:13.686 } 00:26:13.686 EOF 00:26:13.686 )") 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.686 { 00:26:13.686 "params": { 00:26:13.686 "name": "Nvme$subsystem", 00:26:13.686 "trtype": "$TEST_TRANSPORT", 00:26:13.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.686 "adrfam": "ipv4", 00:26:13.686 "trsvcid": "$NVMF_PORT", 00:26:13.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.686 "hdgst": ${hdgst:-false}, 00:26:13.686 "ddgst": ${ddgst:-false} 00:26:13.686 }, 00:26:13.686 "method": "bdev_nvme_attach_controller" 00:26:13.686 } 00:26:13.686 EOF 00:26:13.686 )") 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.686 { 00:26:13.686 "params": { 00:26:13.686 
"name": "Nvme$subsystem", 00:26:13.686 "trtype": "$TEST_TRANSPORT", 00:26:13.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.686 "adrfam": "ipv4", 00:26:13.686 "trsvcid": "$NVMF_PORT", 00:26:13.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.686 "hdgst": ${hdgst:-false}, 00:26:13.686 "ddgst": ${ddgst:-false} 00:26:13.686 }, 00:26:13.686 "method": "bdev_nvme_attach_controller" 00:26:13.686 } 00:26:13.686 EOF 00:26:13.686 )") 00:26:13.686 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.945 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.945 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.945 { 00:26:13.945 "params": { 00:26:13.945 "name": "Nvme$subsystem", 00:26:13.945 "trtype": "$TEST_TRANSPORT", 00:26:13.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.945 "adrfam": "ipv4", 00:26:13.945 "trsvcid": "$NVMF_PORT", 00:26:13.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.945 "hdgst": ${hdgst:-false}, 00:26:13.946 "ddgst": ${ddgst:-false} 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 } 00:26:13.946 EOF 00:26:13.946 )") 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.946 { 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme$subsystem", 00:26:13.946 "trtype": "$TEST_TRANSPORT", 00:26:13.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "$NVMF_PORT", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.946 "hdgst": ${hdgst:-false}, 00:26:13.946 "ddgst": ${ddgst:-false} 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 } 00:26:13.946 EOF 00:26:13.946 )") 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.946 { 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme$subsystem", 00:26:13.946 "trtype": "$TEST_TRANSPORT", 00:26:13.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "$NVMF_PORT", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.946 "hdgst": ${hdgst:-false}, 00:26:13.946 "ddgst": ${ddgst:-false} 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 } 00:26:13.946 EOF 00:26:13.946 )") 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.946 { 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme$subsystem", 00:26:13.946 "trtype": "$TEST_TRANSPORT", 00:26:13.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "$NVMF_PORT", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.946 "hdgst": ${hdgst:-false}, 00:26:13.946 "ddgst": ${ddgst:-false} 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 } 00:26:13.946 EOF 00:26:13.946 )") 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.946 [2024-11-20 06:36:45.547877] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:13.946 [2024-11-20 06:36:45.547927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608164 ] 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.946 { 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme$subsystem", 00:26:13.946 "trtype": "$TEST_TRANSPORT", 00:26:13.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "$NVMF_PORT", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.946 "hdgst": ${hdgst:-false}, 00:26:13.946 "ddgst": ${ddgst:-false} 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 } 00:26:13.946 EOF 00:26:13.946 )") 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.946 { 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme$subsystem", 00:26:13.946 "trtype": "$TEST_TRANSPORT", 00:26:13.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "$NVMF_PORT", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.946 "hdgst": ${hdgst:-false}, 00:26:13.946 "ddgst": ${ddgst:-false} 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 } 00:26:13.946 EOF 00:26:13.946 )") 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.946 { 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme$subsystem", 00:26:13.946 "trtype": "$TEST_TRANSPORT", 00:26:13.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.946 "adrfam": 
"ipv4", 00:26:13.946 "trsvcid": "$NVMF_PORT", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.946 "hdgst": ${hdgst:-false}, 00:26:13.946 "ddgst": ${ddgst:-false} 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 } 00:26:13.946 EOF 00:26:13.946 )") 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:26:13.946 06:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme1", 00:26:13.946 "trtype": "tcp", 00:26:13.946 "traddr": "10.0.0.2", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "4420", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.946 "hdgst": false, 00:26:13.946 "ddgst": false 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 },{ 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme2", 00:26:13.946 "trtype": "tcp", 00:26:13.946 "traddr": "10.0.0.2", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "4420", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:13.946 "hdgst": false, 00:26:13.946 "ddgst": false 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 },{ 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme3", 00:26:13.946 "trtype": "tcp", 00:26:13.946 "traddr": "10.0.0.2", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "4420", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:13.946 "hdgst": false, 00:26:13.946 "ddgst": false 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 },{ 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme4", 00:26:13.946 "trtype": "tcp", 00:26:13.946 "traddr": "10.0.0.2", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "4420", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:13.946 "hdgst": false, 00:26:13.946 "ddgst": false 00:26:13.946 }, 00:26:13.946 "method": "bdev_nvme_attach_controller" 00:26:13.946 },{ 00:26:13.946 "params": { 00:26:13.946 "name": "Nvme5", 00:26:13.946 "trtype": "tcp", 00:26:13.946 "traddr": "10.0.0.2", 00:26:13.946 "adrfam": "ipv4", 00:26:13.946 "trsvcid": "4420", 00:26:13.946 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:13.946 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:13.946 "hdgst": false, 00:26:13.946 "ddgst": false 00:26:13.946 }, 00:26:13.947 "method": "bdev_nvme_attach_controller" 00:26:13.947 },{ 00:26:13.947 "params": { 00:26:13.947 "name": "Nvme6", 00:26:13.947 "trtype": "tcp", 00:26:13.947 "traddr": "10.0.0.2", 00:26:13.947 "adrfam": "ipv4", 00:26:13.947 "trsvcid": "4420", 00:26:13.947 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:13.947 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:13.947 "hdgst": false, 00:26:13.947 "ddgst": false 00:26:13.947 }, 00:26:13.947 "method": "bdev_nvme_attach_controller" 00:26:13.947 },{ 00:26:13.947 "params": { 00:26:13.947 "name": "Nvme7", 00:26:13.947 "trtype": "tcp", 00:26:13.947 "traddr": "10.0.0.2", 00:26:13.947 
"adrfam": "ipv4", 00:26:13.947 "trsvcid": "4420", 00:26:13.947 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:13.947 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:13.947 "hdgst": false, 00:26:13.947 "ddgst": false 00:26:13.947 }, 00:26:13.947 "method": "bdev_nvme_attach_controller" 00:26:13.947 },{ 00:26:13.947 "params": { 00:26:13.947 "name": "Nvme8", 00:26:13.947 "trtype": "tcp", 00:26:13.947 "traddr": "10.0.0.2", 00:26:13.947 "adrfam": "ipv4", 00:26:13.947 "trsvcid": "4420", 00:26:13.947 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:13.947 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:13.947 "hdgst": false, 00:26:13.947 "ddgst": false 00:26:13.947 }, 00:26:13.947 "method": "bdev_nvme_attach_controller" 00:26:13.947 },{ 00:26:13.947 "params": { 00:26:13.947 "name": "Nvme9", 00:26:13.947 "trtype": "tcp", 00:26:13.947 "traddr": "10.0.0.2", 00:26:13.947 "adrfam": "ipv4", 00:26:13.947 "trsvcid": "4420", 00:26:13.947 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:13.947 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:13.947 "hdgst": false, 00:26:13.947 "ddgst": false 00:26:13.947 }, 00:26:13.947 "method": "bdev_nvme_attach_controller" 00:26:13.947 },{ 00:26:13.947 "params": { 00:26:13.947 "name": "Nvme10", 00:26:13.947 "trtype": "tcp", 00:26:13.947 "traddr": "10.0.0.2", 00:26:13.947 "adrfam": "ipv4", 00:26:13.947 "trsvcid": "4420", 00:26:13.947 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:13.947 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:13.947 "hdgst": false, 00:26:13.947 "ddgst": false 00:26:13.947 }, 00:26:13.947 "method": "bdev_nvme_attach_controller" 00:26:13.947 }' 00:26:13.947 [2024-11-20 06:36:45.621839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.947 [2024-11-20 06:36:45.662678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.848 Running I/O for 10 seconds... 
00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:15.848 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.108 06:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:16.108 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 608164 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 608164 ']' 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 608164 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:16.367 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 608164 00:26:16.625 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:16.625 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:16.626 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 608164' 00:26:16.626 killing process with pid 608164 00:26:16.626 06:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 608164 00:26:16.626 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 608164
00:26:16.626 Received shutdown signal, test time was about 0.949269 seconds
00:26:16.626
00:26:16.626 Latency(us)
00:26:16.626 [2024-11-20T05:36:48.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.626 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme1n1 : 0.93 274.21 17.14 0.00 0.00 230673.55 15978.30 217704.35
00:26:16.626 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme2n1 : 0.93 276.27 17.27 0.00 0.00 224849.68 16976.94 205720.62
00:26:16.626 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme3n1 : 0.94 272.78 17.05 0.00 0.00 224182.61 13107.20 218702.99
00:26:16.626 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme4n1 : 0.92 276.83 17.30 0.00 0.00 216872.11 14105.84 213709.78
00:26:16.626 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme5n1 : 0.92 279.59 17.47 0.00 0.00 209890.01 16227.96 197731.47
00:26:16.626 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme6n1 : 0.94 271.62 16.98 0.00 0.00 213150.96 16852.11 219701.64
00:26:16.626 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme7n1 : 0.92 278.76 17.42 0.00 0.00 203685.79 18974.23 210713.84
00:26:16.626 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme8n1 : 0.93 301.59 18.85 0.00 0.00 181815.48 10673.01 208716.56
00:26:16.626 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme9n1 : 0.95 270.77 16.92 0.00 0.00 202684.95 17850.76 225693.50
00:26:16.626 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.626 Verification LBA range: start 0x0 length 0x400
00:26:16.626 Nvme10n1 : 0.95 269.87 16.87 0.00 0.00 199634.41 18974.23 241671.80
00:26:16.626 [2024-11-20T05:36:48.462Z] ===================================================================================================================
00:26:16.626 [2024-11-20T05:36:48.462Z] Total : 2772.28 173.27 0.00 0.00 210464.18 10673.01 241671.80
00:26:16.626 06:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 608101 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:17.998 06:36:49
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.998 rmmod nvme_tcp 00:26:17.998 rmmod nvme_fabrics 00:26:17.998 rmmod nvme_keyring 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 608101 ']' 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 608101 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 608101 ']' 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 608101 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 608101 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 608101' 00:26:17.998 killing process with pid 608101 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 608101 00:26:17.998 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 608101 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.257 06:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.257 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.792 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:20.792 00:26:20.792 real 0m7.693s 00:26:20.793 user 0m23.214s 00:26:20.793 sys 0m1.380s 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.793 ************************************ 00:26:20.793 END TEST nvmf_shutdown_tc2 00:26:20.793 ************************************ 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:20.793 ************************************ 00:26:20.793 START TEST nvmf_shutdown_tc3 00:26:20.793 ************************************ 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:20.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:20.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.793 06:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:20.793 Found net devices under 0000:86:00.0: cvl_0_0 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:20.793 Found net devices under 0000:86:00.1: cvl_0_1 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.793 06:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:20.793 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:26:20.794 00:26:20.794 --- 10.0.0.2 ping statistics --- 00:26:20.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.794 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:20.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:26:20.794 00:26:20.794 --- 10.0.0.1 ping statistics --- 00:26:20.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.794 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=609428 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 609428 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:20.794 06:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 609428 ']' 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:20.794 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:20.794 [2024-11-20 06:36:52.472624] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:20.794 [2024-11-20 06:36:52.472666] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.794 [2024-11-20 06:36:52.549420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.794 [2024-11-20 06:36:52.591096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.794 [2024-11-20 06:36:52.591131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.794 [2024-11-20 06:36:52.591138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.794 [2024-11-20 06:36:52.591144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.794 [2024-11-20 06:36:52.591149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
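[Note: the trace above (nvmf/common.sh@250-291) has just assembled the two-sided NVMe/TCP topology this test runs on: one e810 port (cvl_0_0) is moved into a private namespace and addressed 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP/4420 is opened through the ipts iptables wrapper, reachability is confirmed with one ping in each direction, and nvmf_tgt is then launched inside the namespace. A minimal stand-alone sketch of the same layout, substituting a veth pair for the CI machine's e810 ports (veth_ini, veth_tgt, and the namespace name are illustrative, not from this log):

  # Rebuild the two-namespace NVMe/TCP test topology with a veth pair (run as root).
  ip netns add nvmf_tgt_ns
  ip link add veth_ini type veth peer name veth_tgt
  ip link set veth_tgt netns nvmf_tgt_ns
  ip addr add 10.0.0.1/24 dev veth_ini
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_ini up
  ip netns exec nvmf_tgt_ns ip link set veth_tgt up
  ip netns exec nvmf_tgt_ns ip link set lo up
  # Open the NVMe/TCP listener port on the initiator-facing interface.
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1      # target ns -> root ns
]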
00:26:20.794 [2024-11-20 06:36:52.592748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.794 [2024-11-20 06:36:52.592858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.794 [2024-11-20 06:36:52.592962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.794 [2024-11-20 06:36:52.592963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:21.052 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:21.052 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:26:21.052 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.052 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:21.053 [2024-11-20 06:36:52.736271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.053 06:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:21.053 Malloc1 00:26:21.053 [2024-11-20 06:36:52.844256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.053 Malloc2 00:26:21.312 Malloc3 00:26:21.312 Malloc4 00:26:21.312 Malloc5 00:26:21.312 Malloc6 00:26:21.312 Malloc7 00:26:21.312 Malloc8 00:26:21.571 Malloc9 00:26:21.571 Malloc10 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=609699 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 609699 /var/tmp/bdevperf.sock 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 609699 ']' 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:21.571 06:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:21.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.571 { 00:26:21.571 "params": { 00:26:21.571 "name": "Nvme$subsystem", 00:26:21.571 "trtype": "$TEST_TRANSPORT", 00:26:21.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.571 "adrfam": "ipv4", 00:26:21.571 "trsvcid": "$NVMF_PORT", 00:26:21.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.571 "hdgst": ${hdgst:-false}, 00:26:21.571 "ddgst": ${ddgst:-false} 00:26:21.571 }, 00:26:21.571 "method": "bdev_nvme_attach_controller" 00:26:21.571 } 00:26:21.571 EOF 00:26:21.571 )") 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.571 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.571 { 00:26:21.571 "params": { 00:26:21.571 "name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.572 { 00:26:21.572 "params": { 00:26:21.572 
"name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.572 { 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.572 { 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.572 { 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.572 { 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 [2024-11-20 06:36:53.321057] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:21.572 [2024-11-20 06:36:53.321107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609699 ] 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.572 { 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.572 { 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.572 { 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme$subsystem", 00:26:21.572 "trtype": "$TEST_TRANSPORT", 00:26:21.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.572 "adrfam": 
"ipv4", 00:26:21.572 "trsvcid": "$NVMF_PORT", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.572 "hdgst": ${hdgst:-false}, 00:26:21.572 "ddgst": ${ddgst:-false} 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 } 00:26:21.572 EOF 00:26:21.572 )") 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:21.572 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme1", 00:26:21.572 "trtype": "tcp", 00:26:21.572 "traddr": "10.0.0.2", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "4420", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.572 "hdgst": false, 00:26:21.572 "ddgst": false 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 },{ 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme2", 00:26:21.572 "trtype": "tcp", 00:26:21.572 "traddr": "10.0.0.2", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "4420", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:21.572 "hdgst": false, 00:26:21.572 "ddgst": false 00:26:21.572 }, 00:26:21.572 "method": "bdev_nvme_attach_controller" 00:26:21.572 },{ 00:26:21.572 "params": { 00:26:21.572 "name": "Nvme3", 00:26:21.572 "trtype": "tcp", 00:26:21.572 "traddr": "10.0.0.2", 00:26:21.572 "adrfam": "ipv4", 00:26:21.572 "trsvcid": "4420", 00:26:21.572 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:21.572 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:21.572 "hdgst": false, 00:26:21.572 "ddgst": false 00:26:21.573 }, 00:26:21.573 "method": "bdev_nvme_attach_controller" 00:26:21.573 },{ 00:26:21.573 "params": { 00:26:21.573 "name": "Nvme4", 00:26:21.573 "trtype": "tcp", 00:26:21.573 "traddr": "10.0.0.2", 00:26:21.573 "adrfam": "ipv4", 00:26:21.573 "trsvcid": "4420", 00:26:21.573 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:21.573 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:21.573 "hdgst": false, 00:26:21.573 "ddgst": false 00:26:21.573 }, 00:26:21.573 "method": "bdev_nvme_attach_controller" 00:26:21.573 },{ 00:26:21.573 "params": { 00:26:21.573 "name": "Nvme5", 00:26:21.573 "trtype": "tcp", 00:26:21.573 "traddr": "10.0.0.2", 00:26:21.573 "adrfam": "ipv4", 00:26:21.573 "trsvcid": "4420", 00:26:21.573 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:21.573 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:21.573 "hdgst": false, 00:26:21.573 "ddgst": false 00:26:21.573 }, 00:26:21.573 "method": "bdev_nvme_attach_controller" 00:26:21.573 },{ 00:26:21.573 "params": { 00:26:21.573 "name": "Nvme6", 00:26:21.573 "trtype": "tcp", 00:26:21.573 "traddr": "10.0.0.2", 00:26:21.573 "adrfam": "ipv4", 00:26:21.573 "trsvcid": "4420", 00:26:21.573 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:21.573 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:21.573 "hdgst": false, 00:26:21.573 "ddgst": false 00:26:21.573 }, 00:26:21.573 "method": "bdev_nvme_attach_controller" 00:26:21.573 },{ 00:26:21.573 "params": { 00:26:21.573 "name": "Nvme7", 00:26:21.573 "trtype": "tcp", 00:26:21.573 "traddr": "10.0.0.2", 00:26:21.573 
"adrfam": "ipv4", 00:26:21.573 "trsvcid": "4420", 00:26:21.573 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:21.573 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:21.573 "hdgst": false, 00:26:21.573 "ddgst": false 00:26:21.573 }, 00:26:21.573 "method": "bdev_nvme_attach_controller" 00:26:21.573 },{ 00:26:21.573 "params": { 00:26:21.573 "name": "Nvme8", 00:26:21.573 "trtype": "tcp", 00:26:21.573 "traddr": "10.0.0.2", 00:26:21.573 "adrfam": "ipv4", 00:26:21.573 "trsvcid": "4420", 00:26:21.573 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:21.573 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:21.573 "hdgst": false, 00:26:21.573 "ddgst": false 00:26:21.573 }, 00:26:21.573 "method": "bdev_nvme_attach_controller" 00:26:21.573 },{ 00:26:21.573 "params": { 00:26:21.573 "name": "Nvme9", 00:26:21.573 "trtype": "tcp", 00:26:21.573 "traddr": "10.0.0.2", 00:26:21.573 "adrfam": "ipv4", 00:26:21.573 "trsvcid": "4420", 00:26:21.573 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:21.573 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:21.573 "hdgst": false, 00:26:21.573 "ddgst": false 00:26:21.573 }, 00:26:21.573 "method": "bdev_nvme_attach_controller" 00:26:21.573 },{ 00:26:21.573 "params": { 00:26:21.573 "name": "Nvme10", 00:26:21.573 "trtype": "tcp", 00:26:21.573 "traddr": "10.0.0.2", 00:26:21.573 "adrfam": "ipv4", 00:26:21.573 "trsvcid": "4420", 00:26:21.573 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:21.573 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:21.573 "hdgst": false, 00:26:21.573 "ddgst": false 00:26:21.573 }, 00:26:21.573 "method": "bdev_nvme_attach_controller" 00:26:21.573 }' 00:26:21.573 [2024-11-20 06:36:53.398281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.831 [2024-11-20 06:36:53.439689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.206 Running I/O for 10 seconds... 
00:26:23.464 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:23.464 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:26:23.464 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:23.464 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.464 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:23.464 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.464 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:23.465 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:23.723 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:23.723 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:23.723 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:23.723 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:23.723 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.723 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:23.999 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 609428 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 609428 ']' 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 609428 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 609428 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 609428' 00:26:24.000 killing process with pid 609428 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 609428 00:26:24.000 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 609428 00:26:24.000 [2024-11-20 06:36:55.633391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633619] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the 
state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.633843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269db0 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.634891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.634903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.634919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.634927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.000 [2024-11-20 06:36:55.634933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.634999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 
06:36:55.635083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set 00:26:24.001 [2024-11-20 06:36:55.635217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same 
with the state(6) to be set 00:26:24.001
[... tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7520 is same with the state(6) to be set (repeated verbatim, 2024-11-20 06:36:55.635222 through 06:36:55.635240) ...]
[... tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f79f0 is same with the state(6) to be set (repeated verbatim, 06:36:55.636563 through 06:36:55.636987) ...]
[... tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7ee0 is same with the state(6) to be set (repeated verbatim, 06:36:55.638173 through 06:36:55.638224) ...]
00:26:24.002 [2024-11-20 06:36:55.638490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.002 [2024-11-20 06:36:55.638521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE/ABORTED - SQ DELETION (00/08) pairs for WRITE cid:33 through cid:63 (lba:28800 through lba:32640, step 128), READ cid:4 through cid:6 (lba:25088 through lba:25344), WRITE cid:0 through cid:3 (lba:32768 through lba:33152) and READ cid:7 through cid:31 (lba:25472 through lba:28544), interleaved with tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8260 is same with the state(6) to be set (repeated verbatim, 06:36:55.638738 through 06:36:55.639251) ...]
00:26:24.004 [2024-11-20 06:36:55.639580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:24.004 [2024-11-20 06:36:55.639950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:24.004 [2024-11-20 06:36:55.639972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for qid:0 cid:1 through cid:3 ...]
00:26:24.004 [2024-11-20 06:36:55.640022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2623b90 is same with the state(6) to be set
[... same ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3 abort sequence repeated ...]
00:26:24.005 [2024-11-20 06:36:55.640125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654da0 is same with the state(6) to be set
[... same ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3 abort sequence repeated ...]
00:26:24.005 [2024-11-20 06:36:55.640223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f72b0 is same with the state(6) to be set
[... tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8730 is same with the state(6) to be set (repeated verbatim, 06:36:55.640226 through 06:36:55.640725), interleaved with the same ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3 abort sequences and nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state recv-state errors for tqpair=0x21ec1d0 (06:36:55.640330), tqpair=0x21f7d30 (06:36:55.640430) and tqpair=0x21f81b0 (06:36:55.640519) ...]
00:26:24.005 [2024-11-20 06:36:55.640565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.006 [2024-11-20 06:36:55.640575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE/ABORTED - SQ DELETION (00/08) pairs for cid:1 through cid:31 (lba:24704 through lba:28544, step 128) ...]
00:26:24.007 [2024-11-20 06:36:55.641075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.007 [2024-11-20 06:36:55.641228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.007 [2024-11-20 06:36:55.641234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.007 [2024-11-20 06:36:55.641242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.007 [2024-11-20 06:36:55.641248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.007 [2024-11-20 06:36:55.641256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.007 [2024-11-20 06:36:55.641263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.007 [2024-11-20 06:36:55.641271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.007 [2024-11-20 06:36:55.641278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.007 [2024-11-20 06:36:55.641285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.007 [2024-11-20 06:36:55.641292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.007 [2024-11-20 06:36:55.641299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.007 [2024-11-20 06:36:55.641784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8c00 is same with the state(6) to be set
00:26:24.008 [2024-11-20 06:36:55.643043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f90f0 is same with the state(6) to be set
00:26:24.009 [2024-11-20 06:36:55.645387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12698e0 is same with the state(6) to be set
00:26:24.009 [2024-11-20 06:36:55.656659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.009 [2024-11-20 06:36:55.656816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.009 [2024-11-20 06:36:55.656824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.009
[2024-11-20 06:36:55.656833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.009 [2024-11-20 06:36:55.656841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.009 [2024-11-20 06:36:55.656850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.009 [2024-11-20 06:36:55.656857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.009 [2024-11-20 06:36:55.656866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.009 [2024-11-20 06:36:55.656874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.009 [2024-11-20 06:36:55.656899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.009 [2024-11-20 06:36:55.656910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.009 [2024-11-20 06:36:55.656922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.010 [2024-11-20 06:36:55.656931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.656944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.010 [2024-11-20 06:36:55.656954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.656969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.010 [2024-11-20 06:36:55.656979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:24.010 [2024-11-20 06:36:55.659111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2623b90 (9): Bad file descriptor 00:26:24.010 [2024-11-20 06:36:55.659174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659234] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210c610 is same with the state(6) to be set 00:26:24.010 [2024-11-20 06:36:55.659312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2656550 is same with the state(6) to be set 00:26:24.010 [2024-11-20 06:36:55.659420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2654da0 (9): Bad file descriptor 00:26:24.010 [2024-11-20 06:36:55.659457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.010 [2024-11-20 06:36:55.659508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.010 [2024-11-20 06:36:55.659518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:26:24.010 [2024-11-20 06:36:55.659528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:24.010 [2024-11-20 06:36:55.659539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.010 [2024-11-20 06:36:55.659548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2619c20 is same with the state(6) to be set
00:26:24.010 [2024-11-20 06:36:55.659568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f72b0 (9): Bad file descriptor
00:26:24.010 [2024-11-20 06:36:55.659590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec1d0 (9): Bad file descriptor
00:26:24.010 [2024-11-20 06:36:55.659610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f7d30 (9): Bad file descriptor
00:26:24.010 [2024-11-20 06:36:55.659632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f81b0 (9): Bad file descriptor
00:26:24.010 [2024-11-20 06:36:55.660793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12698e0 is same with the state(6) to be set
00:26:24.010 [2024-11-20 06:36:55.661447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:24.010 [2024-11-20 06:36:55.662438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-20 06:36:55.662467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2623b90 with addr=10.0.0.2, port=4420
00:26:24.010 [2024-11-20 06:36:55.662480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2623b90 is same with the state(6) to be set
00:26:24.010 [2024-11-20 06:36:55.662611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-20 06:36:55.662626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f81b0 with addr=10.0.0.2, port=4420
00:26:24.010 [2024-11-20 06:36:55.662638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f81b0 is same with the state(6) to be set
00:26:24.010 [2024-11-20 06:36:55.663399] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:24.010 [2024-11-20 06:36:55.663429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2623b90 (9): Bad file descriptor
00:26:24.010 [2024-11-20 06:36:55.663445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f81b0 (9): Bad file descriptor
00:26:24.010 [2024-11-20 06:36:55.663507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.010 [2024-11-20 06:36:55.663523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
[... identical command/completion pairs repeated for the remaining outstanding I/O on sqid:1 (WRITE cid:59-63, lba:32128-32640; READ cid:4-52, lba:25088-31232; WRITE cid:0-3, lba:32768-33152; READ cid:53-57, lba:31360-31872), each completed as ABORTED - SQ DELETION (00/08) ...]
00:26:24.012 [2024-11-20 06:36:55.665017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27242a0 is same with the state(6) to be set
00:26:24.012 [2024-11-20 06:36:55.665149] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:24.012 [2024-11-20 06:36:55.665215] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:24.012 [2024-11-20 06:36:55.665269] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:24.012 [2024-11-20 06:36:55.665323] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:24.012 [2024-11-20 06:36:55.665410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:26:24.012 [2024-11-20 06:36:55.665425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:26:24.012 [2024-11-20 06:36:55.665438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:24.012 [2024-11-20 06:36:55.665455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:26:24.012 [2024-11-20 06:36:55.665466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:24.012 [2024-11-20 06:36:55.665476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:24.012 [2024-11-20 06:36:55.665485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:24.012 [2024-11-20 06:36:55.665496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:24.012 [2024-11-20 06:36:55.666909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.012 [2024-11-20 06:36:55.666927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for the remaining outstanding I/O on sqid:1 (READ cid:6-17, lba:25344-26752; WRITE cid:0-4, lba:32768-33280; READ cid:18-63, lba:26880-32640), each completed as ABORTED - SQ DELETION (00/08) ...]
00:26:24.014 [2024-11-20 06:36:55.668118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27254e0 is same with the state(6) to be set
00:26:24.014 [2024-11-20 06:36:55.668278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:24.014 [2024-11-20 06:36:55.669422] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:24.014 [2024-11-20 06:36:55.669446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:24.014 [2024-11-20 06:36:55.669662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.014 [2024-11-20 06:36:55.669680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ec1d0 with addr=10.0.0.2, port=4420
00:26:24.014 [2024-11-20 06:36:55.669689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ec1d0 is same with the state(6) to be set
00:26:24.014 [2024-11-20 06:36:55.669716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:24.014 [2024-11-20 06:36:55.669727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same request/completion pair repeated for admin qid:0 cid:1-3, each completed as ABORTED - SQ DELETION (00/08) ...]
00:26:24.014 [2024-11-20 06:36:55.669786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2651b70 is same with the state(6) to be set
00:26:24.014 [2024-11-20 06:36:55.669807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c610 (9): Bad file descriptor
00:26:24.014 [2024-11-20 06:36:55.669827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2656550 (9): Bad file descriptor
00:26:24.014 [2024-11-20 06:36:55.669862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2619c20 (9): Bad file descriptor
00:26:24.014 [2024-11-20 06:36:55.670367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.014 [2024-11-20 06:36:55.670388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f72b0 with addr=10.0.0.2, port=4420
00:26:24.014 [2024-11-20 06:36:55.670397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f72b0 is same with the state(6) to be set
00:26:24.014 [2024-11-20 06:36:55.670408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec1d0 (9): Bad file descriptor
00:26:24.014 [2024-11-20 06:36:55.670718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.014 [2024-11-20 06:36:55.670730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for READ cid:1-55, lba:16512-23424, each completed as ABORTED - SQ DELETION (00/08) ...]
00:26:24.016 [2024-11-20 06:36:55.671770] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.671778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.671788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.671796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.671806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.671814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.671823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.671832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.671842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.671850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.671862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.671871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.671880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.671888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.671897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.671905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.671914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2727880 is same with the state(6) to be set 00:26:24.016 [2024-11-20 06:36:55.673069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 06:36:55.673583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 06:36:55.673592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.673989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.673999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:24.017 [2024-11-20 06:36:55.674055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 
06:36:55.674238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.017 [2024-11-20 06:36:55.674264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.017 [2024-11-20 06:36:55.674273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fdd30 is same with the state(6) to be set 00:26:24.017 [2024-11-20 06:36:55.675373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:24.017 [2024-11-20 06:36:55.675394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:26:24.017 [2024-11-20 06:36:55.675420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f72b0 (9): Bad file descriptor 00:26:24.017 [2024-11-20 06:36:55.675431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:24.017 [2024-11-20 06:36:55.675440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:24.017 [2024-11-20 06:36:55.675450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:24.018 [2024-11-20 06:36:55.675460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:26:24.018 [2024-11-20 06:36:55.675692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.018 [2024-11-20 06:36:55.675712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7d30 with addr=10.0.0.2, port=4420 00:26:24.018 [2024-11-20 06:36:55.675722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f7d30 is same with the state(6) to be set 00:26:24.018 [2024-11-20 06:36:55.675801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.018 [2024-11-20 06:36:55.675814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2654da0 with addr=10.0.0.2, port=4420 00:26:24.018 [2024-11-20 06:36:55.675824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654da0 is same with the state(6) to be set 00:26:24.018 [2024-11-20 06:36:55.675833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:24.018 [2024-11-20 06:36:55.675841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:24.018 [2024-11-20 06:36:55.675849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:24.018 [2024-11-20 06:36:55.675859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:26:24.018 [2024-11-20 06:36:55.676429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:24.018 [2024-11-20 06:36:55.676447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:26:24.018 [2024-11-20 06:36:55.676472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f7d30 (9): Bad file descriptor
00:26:24.018 [2024-11-20 06:36:55.676485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2654da0 (9): Bad file descriptor
00:26:24.018 [2024-11-20 06:36:55.676673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.018 [2024-11-20 06:36:55.676691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f81b0 with addr=10.0.0.2, port=4420
00:26:24.018 [2024-11-20 06:36:55.676700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f81b0 is same with the state(6) to be set
00:26:24.018 [2024-11-20 06:36:55.676853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.018 [2024-11-20 06:36:55.676867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2623b90 with addr=10.0.0.2, port=4420
00:26:24.018 [2024-11-20 06:36:55.676877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2623b90 is same with the state(6) to be set
00:26:24.018 [2024-11-20 06:36:55.676886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:24.018 [2024-11-20 06:36:55.676894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:24.018 [2024-11-20 06:36:55.676903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:24.018 [2024-11-20 06:36:55.676911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:24.018 [2024-11-20 06:36:55.676920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:24.018 [2024-11-20 06:36:55.676928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:24.018 [2024-11-20 06:36:55.676936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:24.018 [2024-11-20 06:36:55.676943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:26:24.018 [2024-11-20 06:36:55.676987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f81b0 (9): Bad file descriptor
00:26:24.018 [2024-11-20 06:36:55.677000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2623b90 (9): Bad file descriptor
00:26:24.018 [2024-11-20 06:36:55.677038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:24.018 [2024-11-20 06:36:55.677046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:24.018 [2024-11-20 06:36:55.677053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:24.018 [2024-11-20 06:36:55.677060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:24.018 [2024-11-20 06:36:55.677067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:26:24.018 [2024-11-20 06:36:55.677074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:26:24.018 [2024-11-20 06:36:55.677082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:24.018 [2024-11-20 06:36:55.677088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:26:24.018 [2024-11-20 06:36:55.678375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:24.018 [2024-11-20 06:36:55.678573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.018 [2024-11-20 06:36:55.678587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ec1d0 with addr=10.0.0.2, port=4420
00:26:24.018 [2024-11-20 06:36:55.678594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ec1d0 is same with the state(6) to be set
00:26:24.018 [2024-11-20 06:36:55.678619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec1d0 (9): Bad file descriptor
00:26:24.018 [2024-11-20 06:36:55.678645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:24.018 [2024-11-20 06:36:55.678653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:24.018 [2024-11-20 06:36:55.678661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:24.018 [2024-11-20 06:36:55.678677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:24.018 [2024-11-20 06:36:55.679483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2651b70 (9): Bad file descriptor
00:26:24.018 [2024-11-20 06:36:55.679596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.018 [2024-11-20 06:36:55.679610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical READ / ABORTED - SQ DELETION (00/08) command/completion pairs for cid:1-63, nsid:1, lba:16512-24448, len:128 ...]
00:26:24.020 [2024-11-20 06:36:55.680631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x30ab5c0 is same with the state(6) to be set
00:26:24.020 [2024-11-20 06:36:55.681623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.020 [2024-11-20 06:36:55.681637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 4 further identical READ / ABORTED - SQ DELETION (00/08) command/completion pairs for cid:1-4, nsid:1, lba:16512-16896, len:128 ...]
00:26:24.020 [2024-11-20 06:36:55.681714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.681985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.681995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.682002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.682010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.682017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.682026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.682033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.682042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.020 [2024-11-20 06:36:55.682050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.020 [2024-11-20 06:36:55.682058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.021 [2024-11-20 06:36:55.682367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 
06:36:55.682524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.682650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.682657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x32f9230 is same with the state(6) to be set 00:26:24.021 [2024-11-20 06:36:55.683645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.683659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.021 [2024-11-20 06:36:55.683671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.021 [2024-11-20 06:36:55.683683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683840] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.683987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.683996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.022 [2024-11-20 06:36:55.684189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.022 [2024-11-20 06:36:55.684197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.023 [2024-11-20 06:36:55.684488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 
06:36:55.684648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.023 [2024-11-20 06:36:55.684679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.023 [2024-11-20 06:36:55.684686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3546eb0 is same with the state(6) to be set 00:26:24.023 [2024-11-20 06:36:55.685642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:26:24.023 [2024-11-20 06:36:55.685659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:24.023 [2024-11-20 06:36:55.685668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:26:24.023 [2024-11-20 06:36:55.685748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:26:24.023 [2024-11-20 06:36:55.685849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.023 [2024-11-20 06:36:55.685862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2619c20 with addr=10.0.0.2, port=4420 00:26:24.023 [2024-11-20 06:36:55.685870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2619c20 is same with the state(6) to be set 00:26:24.023 [2024-11-20 06:36:55.686029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.023 [2024-11-20 06:36:55.686045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x210c610 with addr=10.0.0.2, port=4420 00:26:24.023 [2024-11-20 06:36:55.686053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210c610 is same with the state(6) to be set 00:26:24.023 [2024-11-20 06:36:55.686195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.023 [2024-11-20 06:36:55.686259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2656550 with addr=10.0.0.2, port=4420 00:26:24.024 [2024-11-20 06:36:55.686268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2656550 is same with the state(6) to be set 00:26:24.024 [2024-11-20 06:36:55.686971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:26:24.024 [2024-11-20 06:36:55.686987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:24.024 [2024-11-20 06:36:55.687157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-20 06:36:55.687171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f72b0 with addr=10.0.0.2, port=4420 00:26:24.024 [2024-11-20 
00:26:24.024 [2024-11-20 06:36:55.687179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f72b0 is same with the state(6) to be set
00:26:24.024 [2024-11-20 06:36:55.687189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2619c20 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.687199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c610 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.687215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2656550 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.687433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.024 [2024-11-20 06:36:55.687448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2654da0 with addr=10.0.0.2, port=4420
00:26:24.024 [2024-11-20 06:36:55.687456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654da0 is same with the state(6) to be set
00:26:24.024 [2024-11-20 06:36:55.687533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.024 [2024-11-20 06:36:55.687546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7d30 with addr=10.0.0.2, port=4420
00:26:24.024 [2024-11-20 06:36:55.687554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f7d30 is same with the state(6) to be set
00:26:24.024 [2024-11-20 06:36:55.687563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f72b0 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.687572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.687579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.687586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.687594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.687602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.687608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.687616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.687622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.687629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.687639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.687646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.687653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.687691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:26:24.024 [2024-11-20 06:36:55.687702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:24.024 [2024-11-20 06:36:55.687722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2654da0 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.687732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f7d30 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.687739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.687746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.687754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.687760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.687961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.024 [2024-11-20 06:36:55.687975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2623b90 with addr=10.0.0.2, port=4420
00:26:24.024 [2024-11-20 06:36:55.687982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2623b90 is same with the state(6) to be set
00:26:24.024 [2024-11-20 06:36:55.688064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.024 [2024-11-20 06:36:55.688075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f81b0 with addr=10.0.0.2, port=4420
00:26:24.024 [2024-11-20 06:36:55.688083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f81b0 is same with the state(6) to be set
00:26:24.024 [2024-11-20 06:36:55.688090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.688097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.688104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.688111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.688118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.688124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.688131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.688138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.688157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2623b90 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.688167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f81b0 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.688184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.688192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.688211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.688218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.688225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.688231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.688238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.688244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.688455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:24.024 [2024-11-20 06:36:55.688615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.024 [2024-11-20 06:36:55.688629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ec1d0 with addr=10.0.0.2, port=4420
00:26:24.024 [2024-11-20 06:36:55.688638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ec1d0 is same with the state(6) to be set
00:26:24.024 [2024-11-20 06:36:55.688656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec1d0 (9): Bad file descriptor
00:26:24.024 [2024-11-20 06:36:55.688675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:24.024 [2024-11-20 06:36:55.688683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:24.024 [2024-11-20 06:36:55.688690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:24.024 [2024-11-20 06:36:55.688697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:24.024 [2024-11-20 06:36:55.689557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.024 [2024-11-20 06:36:55.689572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.024 [2024-11-20 06:36:55.689584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.024 [2024-11-20 06:36:55.689592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.024 [2024-11-20 06:36:55.689600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.024 [2024-11-20 06:36:55.689609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.024 [2024-11-20 06:36:55.689618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.024 [2024-11-20 06:36:55.689626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.024 [2024-11-20 06:36:55.689635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.024 [2024-11-20 06:36:55.689642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.024 [2024-11-20 06:36:55.689651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.024 [2024-11-20 06:36:55.689665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.024 [2024-11-20 06:36:55.689674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.024 [2024-11-20 06:36:55.689684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.024 [2024-11-20 06:36:55.689693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 
06:36:55.689742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689902] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.689987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.689995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.025 [2024-11-20 06:36:55.690269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.025 [2024-11-20 06:36:55.690275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.026 [2024-11-20 06:36:55.690527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.026 [2024-11-20 06:36:55.690535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.026 [2024-11-20 06:36:55.690543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.026 [2024-11-20 06:36:55.690551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.026 [2024-11-20 06:36:55.690559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.026 [2024-11-20 06:36:55.690568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.026 [2024-11-20 06:36:55.690574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.026 [2024-11-20 06:36:55.690583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.026 [2024-11-20 06:36:55.690590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.026 [2024-11-20 06:36:55.690598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fc7b0 is same with the state(6) to be set
00:26:24.026 task offset: 28672 on job bdev=Nvme4n1 fails
00:26:24.026
00:26:24.026                                                          Latency(us)
00:26:24.026 [2024-11-20T05:36:55.862Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:24.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme1n1 ended in about 0.78 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme1n1            :       0.78     247.61      15.48      82.54     0.00  191458.99   15166.90  214708.42
00:26:24.026 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme2n1 ended in about 0.78 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme2n1            :       0.78     250.86      15.68      81.91     0.00  186061.51   16227.96  211712.49
00:26:24.026 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme3n1 ended in about 0.78 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme3n1            :       0.78     251.27      15.70      81.63     0.00  182228.58   18974.23  202724.69
00:26:24.026 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme4n1 ended in about 0.77 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme4n1            :       0.77     253.52      15.84      82.78     0.00  176429.63   19099.06  204721.98
00:26:24.026 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme5n1 ended in about 0.79 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme5n1            :       0.79     162.52      10.16      81.26     0.00  238725.28   18225.25  217704.35
00:26:24.026 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme6n1 ended in about 0.80 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme6n1            :       0.80     160.75      10.05      80.37     0.00  236464.68   17351.44  216705.71
00:26:24.026 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme7n1 ended in about 0.80 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme7n1            :       0.80     160.34      10.02      80.17     0.00  232001.67   23218.47  230686.72
00:26:24.026 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme8n1 ended in about 0.80 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme8n1            :       0.80     159.94      10.00      79.97     0.00  227586.28   14355.50  217704.35
00:26:24.026 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme9n1 ended in about 0.81 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme9n1            :       0.81     158.77       9.92      79.38     0.00  224418.21   20472.20  217704.35
00:26:24.026 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.026 Job: Nvme10n1 ended in about 0.79 seconds with error
00:26:24.026 Verification LBA range: start 0x0 length 0x400
00:26:24.026 Nvme10n1           :       0.79     162.03      10.13      81.02     0.00  213957.49   19099.06  235679.94
00:26:24.026 [2024-11-20T05:36:55.862Z] ===================================================================================================================
00:26:24.026 [2024-11-20T05:36:55.862Z] Total              :               1967.60     122.98     811.03     0.00  207614.60   14355.50  235679.94
00:26:24.026 [2024-11-20 06:36:55.722717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:24.026 [2024-11-20 06:36:55.722769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:24.026 [2024-11-20 06:36:55.723116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.026 [2024-11-20 06:36:55.723139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2651b70 with addr=10.0.0.2, port=4420
00:26:24.026 [2024-11-20 06:36:55.723151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2651b70 is same with the state(6) to be set
00:26:24.026 [2024-11-20 06:36:55.723513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2651b70 (9): Bad file descriptor
00:26:24.026 [2024-11-20 06:36:55.723786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:24.026 [2024-11-20 06:36:55.723803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:24.027 [2024-11-20 06:36:55.723813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:26:24.027 [2024-11-20 06:36:55.723822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:24.027 [2024-11-20 06:36:55.723830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:24.027 [2024-11-20 06:36:55.723867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:26:24.027 [2024-11-20 06:36:55.723875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:26:24.027 [2024-11-20 06:36:55.723885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:26:24.027 [2024-11-20 06:36:55.723893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:26:24.027 [2024-11-20 06:36:55.723923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:26:24.027 [2024-11-20 06:36:55.723938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:24.027 [2024-11-20 06:36:55.723947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:24.027 [2024-11-20 06:36:55.723955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:24.027 [2024-11-20 06:36:55.724090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.027 [2024-11-20 06:36:55.724105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2656550 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.724114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2656550 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.724336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.027 [2024-11-20 06:36:55.724350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x210c610 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.724359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210c610 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.724444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.027 [2024-11-20 06:36:55.724456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2619c20 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.724464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2619c20 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.724551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.027 [2024-11-20 06:36:55.724564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f72b0 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.724573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f72b0 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.724701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.027 [2024-11-20 06:36:55.724712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7d30 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.724721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f7d30 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.724840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.027 [2024-11-20 06:36:55.724854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2654da0 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.724862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654da0 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.724928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 
111 00:26:24.027 [2024-11-20 06:36:55.724940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f81b0 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.724948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f81b0 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.725079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.027 [2024-11-20 06:36:55.725091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2623b90 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.725100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2623b90 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.725242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.027 [2024-11-20 06:36:55.725254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ec1d0 with addr=10.0.0.2, port=4420 00:26:24.027 [2024-11-20 06:36:55.725263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ec1d0 is same with the state(6) to be set 00:26:24.027 [2024-11-20 06:36:55.725277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2656550 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c610 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2619c20 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f72b0 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f7d30 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2654da0 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f81b0 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2623b90 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec1d0 (9): Bad file descriptor 00:26:24.027 [2024-11-20 06:36:55.725378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:24.027 [2024-11-20 06:36:55.725386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:24.027 [2024-11-20 06:36:55.725393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:24.027 [2024-11-20 06:36:55.725402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:26:24.027 [2024-11-20 06:36:55.725411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:24.027 [2024-11-20 06:36:55.725417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:24.027 [2024-11-20 06:36:55.725424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:24.027 [2024-11-20 06:36:55.725431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:26:24.027 [2024-11-20 06:36:55.725438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:24.027 [2024-11-20 06:36:55.725444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:24.027 [2024-11-20 06:36:55.725451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:24.027 [2024-11-20 06:36:55.725458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:26:24.027 [2024-11-20 06:36:55.725465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:24.027 [2024-11-20 06:36:55.725473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:24.027 [2024-11-20 06:36:55.725479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:24.027 [2024-11-20 06:36:55.725486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:26:24.027 [2024-11-20 06:36:55.725492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:24.027 [2024-11-20 06:36:55.725499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:24.027 [2024-11-20 06:36:55.725506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:24.027 [2024-11-20 06:36:55.725512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:26:24.027 [2024-11-20 06:36:55.725538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:26:24.027 [2024-11-20 06:36:55.725546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:26:24.027 [2024-11-20 06:36:55.725554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:24.027 [2024-11-20 06:36:55.725561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:26:24.028 [2024-11-20 06:36:55.725568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:24.028 [2024-11-20 06:36:55.725575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:24.028 [2024-11-20 06:36:55.725582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:24.028 [2024-11-20 06:36:55.725588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:24.028 [2024-11-20 06:36:55.725596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:24.028 [2024-11-20 06:36:55.725602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:24.028 [2024-11-20 06:36:55.725609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:24.028 [2024-11-20 06:36:55.725616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:26:24.028 [2024-11-20 06:36:55.725623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:24.028 [2024-11-20 06:36:55.725629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:24.028 [2024-11-20 06:36:55.725636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:24.028 [2024-11-20 06:36:55.725643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:26:24.287 06:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 609699 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 609699 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 609699 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:25.224 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:25.484 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:25.484 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:25.484 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:25.484 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.484 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:25.484 rmmod nvme_tcp 00:26:25.484 
rmmod nvme_fabrics 00:26:25.484 rmmod nvme_keyring 00:26:25.484 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 609428 ']' 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 609428 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 609428 ']' 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 609428 00:26:25.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (609428) - No such process 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 609428 is not found' 00:26:25.485 Process with pid 609428 is not found 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.485 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.391 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.391 00:26:27.391 real 0m7.112s 00:26:27.391 user 0m16.141s 00:26:27.391 sys 0m1.301s 00:26:27.391 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:27.391 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:27.391 ************************************ 00:26:27.391 END TEST nvmf_shutdown_tc3 00:26:27.391 ************************************ 00:26:27.650 06:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:27.650 ************************************ 00:26:27.650 START TEST nvmf_shutdown_tc4 00:26:27.650 ************************************ 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.650 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:27.651 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:27.651 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.651 06:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:27.651 Found net devices under 0000:86:00.0: cvl_0_0 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:27.651 Found net devices under 0000:86:00.1: cvl_0_1 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.651 06:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.651 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:26:27.909 00:26:27.909 --- 10.0.0.2 ping statistics --- 00:26:27.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.909 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:26:27.909 00:26:27.909 --- 10.0.0.1 ping statistics --- 00:26:27.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.909 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=610748 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 610748 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 610748 ']' 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
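
For orientation in the dense trace above: nvmftestinit found the two E810 ports (0000:86:00.0 -> cvl_0_0, 0000:86:00.1 -> cvl_0_1), moved the target-side port into a private network namespace, addressed both ends, opened the NVMe/TCP port, and verified the path with a ping in each direction. A condensed sketch of that wiring, restating only commands already traced above:

    ip netns add cvl_0_0_ns_spdk                                       # target's namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

nvmf/common.sh@293 then prepends the namespace prefix to NVMF_APP (stacking one copy per pass through the init path, hence the fourfold "ip netns exec cvl_0_0_ns_spdk" on the command line above), so the nvmf_tgt being waited on is simply the target running inside that namespace.
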
00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:27.909 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 [2024-11-20 06:36:59.664437] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:27.909 [2024-11-20 06:36:59.664480] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.167 [2024-11-20 06:36:59.742081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.167 [2024-11-20 06:36:59.784727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.167 [2024-11-20 06:36:59.784764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.167 [2024-11-20 06:36:59.784771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.167 [2024-11-20 06:36:59.784777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.167 [2024-11-20 06:36:59.784782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.167 [2024-11-20 06:36:59.788219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.167 [2024-11-20 06:36:59.788305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.167 [2024-11-20 06:36:59.788414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.167 [2024-11-20 06:36:59.788416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:28.731 [2024-11-20 06:37:00.537674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:28.731 06:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:28.731 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[... the shutdown.sh@28 / @29 pair above repeats once per subsystem, i=1 through 10 ...]
00:26:28.988 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:26:28.988 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:28.988 06:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:28.988 Malloc1
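
The ten shutdown.sh@28/@29 iterations above append one RPC block per subsystem to rpcs.txt, and line @36 then replays the whole file through rpc_cmd in a single batch; Malloc1 above and Malloc2 through Malloc10 below are the bdev names echoed back as those blocks execute. The heredoc body itself is not echoed by the trace, so the per-iteration sketch below is a reconstruction, not a quote: the names follow the pattern visible elsewhere in this log (Malloc$i bdevs, nqn.2016-06.io.spdk:cnode$i subsystems, the 10.0.0.2:4420 listener), while the malloc size/block size and serial number are illustrative assumptions:

    # one loop iteration's contribution to rpcs.txt (exact arguments assumed)
    cat <<EOF >> rpcs.txt
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
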
00:26:28.988 [2024-11-20 06:37:00.645230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.988 Malloc2 00:26:28.988 Malloc3 00:26:28.988 Malloc4 00:26:28.988 Malloc5 00:26:29.245 Malloc6 00:26:29.245 Malloc7 00:26:29.245 Malloc8 00:26:29.245 Malloc9 00:26:29.245 Malloc10 00:26:29.246 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.246 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:29.246 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.246 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:29.246 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=611029 00:26:29.246 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:26:29.246 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:26:29.503 [2024-11-20 06:37:01.140701] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:34.777 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 610748 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 610748 ']' 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 610748 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 610748 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 610748' 00:26:34.778 killing process with pid 610748 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 610748 00:26:34.778 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 610748 00:26:34.778 Write completed with error (sct=0, sc=8) 
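
The "Write completed with error (sct=0, sc=8)" line above opens the expected failure flood: the spdk_nvme_perf run started a moment earlier (queue depth 128, 45056-byte random writes, four qpairs per controller via -P 4) is still mid-run when killprocess takes nvmf_tgt down, so every write still queued on the initiator side has to fail back. Reduced to a sketch, the sequence just traced is:

    # tc4 in miniature: start the load, let it ramp, then take the target away mid-I/O
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5                      # target/shutdown.sh@150
    kill 610748 && wait 610748   # nvmf_tgt exits; its TCP qpairs vanish under perf

In the flood that follows, sct=0/sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", and each "CQ transport error -6 (No such device or address)" entry is the initiator noticing that one qpair's TCP connection is gone (qpair ids 1-4 per controller, matching -P 4).
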
00:26:34.778 Write completed with error (sct=0, sc=8)
00:26:34.778 starting I/O failed: -6
00:26:34.778 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 repeat for every write still queued to nqn.2016-06.io.spdk:cnode10, interleaved with the errors below ...]
00:26:34.778 [2024-11-20 06:37:06.150121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:34.778 [2024-11-20 06:37:06.150983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.778 [2024-11-20 06:37:06.151859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67dcc0 is same with the state(6) to be set [logged 6 times]
00:26:34.778 [2024-11-20 06:37:06.152011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:34.778 [2024-11-20 06:37:06.152206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e1b0 is same with the state(6) to be set [logged 7 times]
00:26:34.778 [2024-11-20 06:37:06.152561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e6a0 is same with the state(6) to be set [logged 10 times]
00:26:34.779 [2024-11-20 06:37:06.153028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d7f0 is same with the state(6) to be set [logged 9 times]
00:26:34.779 [2024-11-20 06:37:06.153475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.779 NVMe io qpair process completion error
00:26:34.779 Write completed with error (sct=0, sc=8)
00:26:34.779 starting I/O failed: -6
00:26:34.779 [... same write-failure pattern for nqn.2016-06.io.spdk:cnode1, interleaved with the errors below ...]
00:26:34.779 [2024-11-20 06:37:06.154079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1f70 is same with the state(6) to be set [logged 6 times]
00:26:34.779 [2024-11-20 06:37:06.154426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:34.779 [2024-11-20 06:37:06.155300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.780 [2024-11-20 06:37:06.156297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:34.780 [2024-11-20 06:37:06.158220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.780 NVMe io qpair process completion error
00:26:34.780 Write completed with error (sct=0, sc=8)
00:26:34.780 starting I/O failed: -6
00:26:34.780 Write completed with error (sct=0, sc=8)
00:26:34.780 Write completed with error (sct=0, sc=8)
00:26:34.780 Write completed with error (sct=0, sc=8)
00:26:34.780 Write completed with error (sct=0, sc=8)
00:26:34.780 starting I/O failed: -6
00:26:34.780 Write completed with error (sct=0,
sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 starting I/O failed: -6 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 starting I/O failed: -6 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 starting I/O failed: -6 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 starting I/O failed: -6 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.780 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 [2024-11-20 06:37:06.159257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with 
error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 [2024-11-20 06:37:06.160021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error 
(sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 [2024-11-20 06:37:06.161052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 
00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.781 starting I/O failed: -6 00:26:34.781 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 
00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 [2024-11-20 06:37:06.162972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.782 NVMe io qpair process completion error 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 [2024-11-20 06:37:06.163975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 
00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 [2024-11-20 06:37:06.164756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, 
sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 [2024-11-20 06:37:06.165764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) 
on qpair id 4 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.782 starting I/O failed: -6 00:26:34.782 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O 
failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 [2024-11-20 06:37:06.167698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.783 NVMe io qpair process completion error 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write 
completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 [2024-11-20 06:37:06.168788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write 
completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 [2024-11-20 06:37:06.169573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.783 starting I/O failed: -6 00:26:34.783 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O 
failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 [2024-11-20 06:37:06.170580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed 
with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with 
error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 [2024-11-20 06:37:06.175532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.784 NVMe io qpair process completion error 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 [2024-11-20 06:37:06.176550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6 00:26:34.784 Write completed with error (sct=0, sc=8) 00:26:34.784 Write 
completed with error (sct=0, sc=8) 00:26:34.784 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:26:34.785 [2024-11-20 06:37:06.177445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write failures condensed ...]
00:26:34.785 [2024-11-20 06:37:06.178462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write failures condensed ...]
00:26:34.785 [2024-11-20 06:37:06.180293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:34.785 NVMe io qpair process completion error
[... repeated write failures condensed ...]
00:26:34.786 [2024-11-20 06:37:06.181310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write failures condensed ...]
00:26:34.786 [2024-11-20 06:37:06.182170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures condensed ...]
00:26:34.786 [2024-11-20 06:37:06.183169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write failures condensed ...]
00:26:34.787 [2024-11-20 06:37:06.184713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.787 NVMe io qpair process completion error
[... repeated write failures condensed ...]
00:26:34.787 [2024-11-20 06:37:06.185705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures condensed ...]
00:26:34.787 [2024-11-20 06:37:06.186584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write failures condensed ...]
00:26:34.787 [2024-11-20 06:37:06.187587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write failures condensed ...]
00:26:34.788 [2024-11-20 06:37:06.190230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.788 NVMe io qpair process completion error
[... repeated write failures condensed ...]
00:26:34.788 [2024-11-20 06:37:06.191282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures condensed ...]
00:26:34.788 [2024-11-20 06:37:06.192166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write failures condensed ...]
00:26:34.789 [2024-11-20 06:37:06.193147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write failures condensed ...]
00:26:34.789 [2024-11-20 06:37:06.196991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:34.789 NVMe io qpair process completion error
[... repeated write failures condensed through 00:26:34.790 ...] 00:26:34.790 Write completed with error
(sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Write completed with error (sct=0, sc=8) 00:26:34.790 starting I/O failed: -6 00:26:34.790 Initializing NVMe Controllers 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:26:34.790 Controller IO queue size 128, less than required. 00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:26:34.790 Controller IO queue size 128, less than required. 00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:26:34.790 Controller IO queue size 128, less than required. 00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:26:34.790 Controller IO queue size 128, less than required. 00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:26:34.790 Controller IO queue size 128, less than required. 00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.790 Controller IO queue size 128, less than required. 00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:26:34.790 Controller IO queue size 128, less than required. 00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:26:34.790 Controller IO queue size 128, less than required. 00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:26:34.790 Controller IO queue size 128, less than required. 
00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:34.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:26:34.790 Controller IO queue size 128, less than required.
00:26:34.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:34.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:34.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:34.791 Initialization complete. Launching workers.
00:26:34.791 ========================================================
00:26:34.791                                                                              Latency(us)
00:26:34.791 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2150.11      92.39   59536.52     686.94  109455.66
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2186.87      93.97   58545.13     698.35  111493.13
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2185.38      93.90   58609.20     735.09  114846.99
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2223.20      95.53   57643.80     846.06  119272.19
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2208.97      94.92   57388.65     755.60  100457.88
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2222.56      95.50   57045.07     886.44  100204.80
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2209.82      94.95   57388.07     779.01   99299.98
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2231.06      95.87   56854.58     808.38   98858.53
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2246.58      96.53   56476.55     696.02  101373.94
00:26:34.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2165.83      93.06   58633.27     702.47  106964.34
00:26:34.791 ========================================================
00:26:34.791 Total                                                                    :   22030.37     946.62   57800.44     686.94  119272.19
00:26:34.791
00:26:34.791 [2024-11-20 06:37:06.204665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c6ef0 is same with the state(6) to be set
00:26:34.791 [2024-11-20 06:37:06.204716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c6bc0 is same with the state(6) to be set
00:26:34.791 [2024-11-20 06:37:06.204746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c7a70 is same with the state(6) to be set
00:26:34.791 [2024-11-20
06:37:06.204778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c6560 is same with the state(6) to be set 00:26:34.791 [2024-11-20 06:37:06.204807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c8900 is same with the state(6) to be set 00:26:34.791 [2024-11-20 06:37:06.204836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c8720 is same with the state(6) to be set 00:26:34.791 [2024-11-20 06:37:06.204864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c8ae0 is same with the state(6) to be set 00:26:34.791 [2024-11-20 06:37:06.204898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c6890 is same with the state(6) to be set 00:26:34.791 [2024-11-20 06:37:06.204927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c7740 is same with the state(6) to be set 00:26:34.791 [2024-11-20 06:37:06.204955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c7410 is same with the state(6) to be set 00:26:34.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:34.791 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 611029 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 611029 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 611029 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:26:35.738 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:35.739 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:35.739 rmmod nvme_tcp 00:26:35.739 rmmod nvme_fabrics 00:26:35.997 rmmod nvme_keyring 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 610748 ']' 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 610748 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 610748 ']' 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 610748 00:26:35.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (610748) - No such process 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 610748 is not found' 00:26:35.997 Process with pid 610748 is not found 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:35.997 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:26:35.998 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:35.998 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:26:35.998 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.998 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.998 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.998 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.998 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:37.947 00:26:37.947 real 0m10.383s 00:26:37.947 user 0m27.452s 00:26:37.947 sys 0m5.210s 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:37.947 ************************************ 00:26:37.947 END TEST nvmf_shutdown_tc4 00:26:37.947 ************************************ 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:26:37.947 00:26:37.947 real 0m40.948s 00:26:37.947 user 1m40.935s 00:26:37.947 sys 0m14.020s 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:37.947 ************************************ 00:26:37.947 END TEST nvmf_shutdown 00:26:37.947 ************************************ 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:37.947 06:37:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:37.948 06:37:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:38.255 ************************************ 00:26:38.255 START TEST nvmf_nsid 00:26:38.255 ************************************ 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:38.255 * Looking for test storage... 
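The nvmf_nsid test starting here can also be run outside the run_test wrapper; a minimal sketch, assuming a built SPDK checkout at $SPDK_DIR and root privileges (both assumptions, the harness normally provides them), using the same script path and transport flag shown in the run_test line above:

  # Standalone invocation of the same nsid test (path and flag taken from the log)
  cd "$SPDK_DIR"
  sudo ./test/nvmf/target/nsid.sh --transport=tcp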
00:26:38.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.255 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:38.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.255 --rc genhtml_branch_coverage=1 00:26:38.255 --rc genhtml_function_coverage=1 00:26:38.255 --rc genhtml_legend=1 00:26:38.255 --rc geninfo_all_blocks=1 00:26:38.256 --rc geninfo_unexecuted_blocks=1 00:26:38.256 00:26:38.256 ' 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:38.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.256 --rc genhtml_branch_coverage=1 00:26:38.256 --rc genhtml_function_coverage=1 00:26:38.256 --rc genhtml_legend=1 00:26:38.256 --rc geninfo_all_blocks=1 00:26:38.256 --rc geninfo_unexecuted_blocks=1 00:26:38.256 00:26:38.256 ' 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:38.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.256 --rc genhtml_branch_coverage=1 00:26:38.256 --rc genhtml_function_coverage=1 00:26:38.256 --rc genhtml_legend=1 00:26:38.256 --rc geninfo_all_blocks=1 00:26:38.256 --rc geninfo_unexecuted_blocks=1 00:26:38.256 00:26:38.256 ' 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:38.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.256 --rc genhtml_branch_coverage=1 00:26:38.256 --rc genhtml_function_coverage=1 00:26:38.256 --rc genhtml_legend=1 00:26:38.256 --rc geninfo_all_blocks=1 00:26:38.256 --rc geninfo_unexecuted_blocks=1 00:26:38.256 00:26:38.256 ' 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:38.256 06:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:38.256 06:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:44.859 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.859 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:44.860 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:44.860 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
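The device scan traced here matches NICs purely by PCI vendor/device ID (0x8086:0x159b for the two E810 functions found) and then resolves their net devices through sysfs, the same pci_net_devs glob visible below. A rough standalone equivalent (a sketch, not the harness's own code; 0x1592 is the other E810 ID listed in the e810 array above):

  # List E810 functions by vendor:device ID, then show the netdev bound to each
  for pci in $(lspci -Dnn | awk '/8086:(1592|159b)/{print $1}'); do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done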
00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:44.860 Found net devices under 0000:86:00.0: cvl_0_0 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:44.860 Found net devices under 0000:86:00.1: cvl_0_1 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.860 06:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.860 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:26:44.861 00:26:44.861 --- 10.0.0.2 ping statistics --- 00:26:44.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.861 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:44.861 00:26:44.861 --- 10.0.0.1 ping statistics --- 00:26:44.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.861 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=615563 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 615563 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 615563 ']' 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:44.861 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:44.861 [2024-11-20 06:37:15.983592] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:26:44.861 [2024-11-20 06:37:15.983637] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.861 [2024-11-20 06:37:16.066001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.861 [2024-11-20 06:37:16.109174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.861 [2024-11-20 06:37:16.109217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.861 [2024-11-20 06:37:16.109224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.861 [2024-11-20 06:37:16.109231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.861 [2024-11-20 06:37:16.109236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.861 [2024-11-20 06:37:16.109822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=615739 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.121 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
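The get_main_ns_ip trace just above reduces to a transport-keyed lookup plus one level of indirection; condensed into a sketch (variable names taken from the trace, the -z guards dropped, surrounding setup assumed):

  # Pick the address a test should dial: initiator-side IP for tcp, first target IP for rdma
  NVMF_INITIATOR_IP=10.0.0.1          # set earlier by nvmf_tcp_init in this log
  TEST_TRANSPORT=tcp
  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  ip=${!ip_candidates[$TEST_TRANSPORT]}   # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
  echo "$ip"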
00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cbaf1a94-17bc-4a0b-820b-5f812ec569f6 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=73642fa1-86f3-41df-bc1c-9aa03ce647ee 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=33d04c52-4c76-4109-a670-1f4aabafe021 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.122 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:45.122 null0 00:26:45.122 null1 00:26:45.122 [2024-11-20 06:37:16.912776] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:45.122 [2024-11-20 06:37:16.912821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615739 ] 00:26:45.122 null2 00:26:45.122 [2024-11-20 06:37:16.919713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.122 [2024-11-20 06:37:16.943909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.381 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.381 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 615739 /var/tmp/tgt2.sock 00:26:45.381 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 615739 ']' 00:26:45.381 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:45.381 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:45.382 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:45.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
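Note that the second target launched above is driven entirely through its private RPC socket (the -r /var/tmp/tgt2.sock argument), which is what the rpc.py -s call that follows uses to create the null0/null1/null2 bdevs. A minimal sketch of that pattern (socket path and bdev names from the log; the size and block-size arguments are assumptions, since the log only shows the resulting names):

  # Address a specific SPDK target instance by its RPC socket
  RPC="scripts/rpc.py -s /var/tmp/tgt2.sock"
  $RPC bdev_null_create null0 100 4096   # name, size in MiB, block size (sizes assumed)
  $RPC bdev_null_create null1 100 4096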
00:26:45.382 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:45.382 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:45.382 [2024-11-20 06:37:16.985871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.382 [2024-11-20 06:37:17.026363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.640 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:45.640 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:26:45.640 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:45.899 [2024-11-20 06:37:17.557390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.899 [2024-11-20 06:37:17.573507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:45.899 nvme0n1 nvme0n2 00:26:45.899 nvme1n1 00:26:45.899 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:45.899 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:45.899 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:26:47.279 06:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:26:48.215 06:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cbaf1a94-17bc-4a0b-820b-5f812ec569f6 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cbaf1a9417bc4a0b820b5f812ec569f6 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CBAF1A9417BC4A0B820B5F812EC569F6 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CBAF1A9417BC4A0B820B5F812EC569F6 == \C\B\A\F\1\A\9\4\1\7\B\C\4\A\0\B\8\2\0\B\5\F\8\1\2\E\C\5\6\9\F\6 ]] 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 73642fa1-86f3-41df-bc1c-9aa03ce647ee 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=73642fa186f341dfbc1c9aa03ce647ee 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 73642FA186F341DFBC1C9AA03CE647EE 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 73642FA186F341DFBC1C9AA03CE647EE == \7\3\6\4\2\F\A\1\8\6\F\3\4\1\D\F\B\C\1\C\9\A\A\0\3\C\E\6\4\7\E\E ]] 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:26:48.215 06:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 33d04c52-4c76-4109-a670-1f4aabafe021 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:48.215 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:48.216 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:26:48.216 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:48.216 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=33d04c524c764109a6701f4aabafe021 00:26:48.216 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 33D04C524C764109A6701F4AABAFE021 00:26:48.216 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 33D04C524C764109A6701F4AABAFE021 == \3\3\D\0\4\C\5\2\4\C\7\6\4\1\0\9\A\6\7\0\1\F\4\A\A\B\A\F\E\0\2\1 ]] 00:26:48.216 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 615739 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 615739 ']' 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 615739 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 615739 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 615739' 00:26:48.475 killing process with pid 615739 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 615739 00:26:48.475 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 615739 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:48.735 rmmod nvme_tcp 00:26:48.735 rmmod nvme_fabrics 00:26:48.735 rmmod nvme_keyring 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 615563 ']' 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 615563 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 615563 ']' 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 615563 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:48.735 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 615563 00:26:48.994 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 615563' 00:26:48.995 killing process with pid 615563 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 615563 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 615563 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.995 06:37:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.532 06:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:51.532 00:26:51.532 real 0m13.022s 00:26:51.532 user 0m10.421s 00:26:51.532 
sys 0m5.550s 00:26:51.532 06:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:51.532 06:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:51.532 ************************************ 00:26:51.532 END TEST nvmf_nsid 00:26:51.532 ************************************ 00:26:51.532 06:37:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:51.532 00:26:51.532 real 12m6.244s 00:26:51.532 user 26m6.577s 00:26:51.532 sys 3m40.039s 00:26:51.532 06:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:51.532 06:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:51.532 ************************************ 00:26:51.532 END TEST nvmf_target_extra 00:26:51.532 ************************************ 00:26:51.532 06:37:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:51.532 06:37:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:51.532 06:37:22 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:51.532 06:37:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:51.532 ************************************ 00:26:51.532 START TEST nvmf_host 00:26:51.532 ************************************ 00:26:51.532 06:37:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:51.532 * Looking for test storage... 00:26:51.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.532 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:51.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.533 --rc genhtml_branch_coverage=1 00:26:51.533 --rc genhtml_function_coverage=1 00:26:51.533 --rc genhtml_legend=1 00:26:51.533 --rc geninfo_all_blocks=1 00:26:51.533 --rc geninfo_unexecuted_blocks=1 00:26:51.533 00:26:51.533 ' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:51.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.533 --rc genhtml_branch_coverage=1 00:26:51.533 --rc genhtml_function_coverage=1 00:26:51.533 --rc genhtml_legend=1 00:26:51.533 --rc geninfo_all_blocks=1 00:26:51.533 --rc geninfo_unexecuted_blocks=1 00:26:51.533 00:26:51.533 ' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:51.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.533 --rc genhtml_branch_coverage=1 00:26:51.533 --rc genhtml_function_coverage=1 00:26:51.533 --rc genhtml_legend=1 00:26:51.533 --rc geninfo_all_blocks=1 00:26:51.533 --rc geninfo_unexecuted_blocks=1 00:26:51.533 00:26:51.533 ' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:51.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.533 --rc genhtml_branch_coverage=1 00:26:51.533 --rc genhtml_function_coverage=1 00:26:51.533 --rc genhtml_legend=1 00:26:51.533 --rc geninfo_all_blocks=1 00:26:51.533 --rc geninfo_unexecuted_blocks=1 00:26:51.533 00:26:51.533 ' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
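The lt 1.15 2 walk traced above is scripts/common.sh's component-wise version compare: both strings are split on the separator set ".-:" and compared numerically field by field. A condensed sketch of the same logic (treating a missing field as 0, which is how 1.15 sorts below 2):

  IFS=.-: read -ra ver1 <<< "1.15"
  IFS=.-: read -ra ver2 <<< "2"
  max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < max; v++)); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; break; }   # 1 < 2 -> lt
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; break; }
  done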
00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.533 ************************************ 00:26:51.533 START TEST nvmf_multicontroller 00:26:51.533 ************************************ 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:51.533 * Looking for test storage... 
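The "[: : integer expression expected" message above is test(1) objecting to a numeric comparison against an empty string: nvmf/common.sh line 33 executes [ '' -eq 1 ] because the flag it checks is unset in this configuration, and the failed test simply falls through as false. A defensive pattern that avoids the noise (SOME_FLAG is a hypothetical stand-in for the unset variable):

  [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag enabled"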
00:26:51.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:51.533 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:51.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.534 --rc genhtml_branch_coverage=1 00:26:51.534 --rc genhtml_function_coverage=1 00:26:51.534 --rc genhtml_legend=1 00:26:51.534 --rc geninfo_all_blocks=1 00:26:51.534 --rc geninfo_unexecuted_blocks=1 00:26:51.534 00:26:51.534 ' 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:51.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.534 --rc genhtml_branch_coverage=1 00:26:51.534 --rc genhtml_function_coverage=1 00:26:51.534 --rc genhtml_legend=1 00:26:51.534 --rc geninfo_all_blocks=1 00:26:51.534 --rc geninfo_unexecuted_blocks=1 00:26:51.534 00:26:51.534 ' 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:51.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.534 --rc genhtml_branch_coverage=1 00:26:51.534 --rc genhtml_function_coverage=1 00:26:51.534 --rc genhtml_legend=1 00:26:51.534 --rc geninfo_all_blocks=1 00:26:51.534 --rc geninfo_unexecuted_blocks=1 00:26:51.534 00:26:51.534 ' 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:51.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.534 --rc genhtml_branch_coverage=1 00:26:51.534 --rc genhtml_function_coverage=1 00:26:51.534 --rc genhtml_legend=1 00:26:51.534 --rc geninfo_all_blocks=1 00:26:51.534 --rc geninfo_unexecuted_blocks=1 00:26:51.534 00:26:51.534 ' 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:51.534 06:37:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.534 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.794 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.795 06:37:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.795 06:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:26:58.365 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.365 
06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:58.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:58.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.366 06:37:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:58.366 Found net devices under 0000:86:00.0: cvl_0_0 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:58.366 Found net devices under 0000:86:00.1: cvl_0_1 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
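The discovery loop traced above maps each matching PCI function to its kernel netdev by globbing sysfs, then keeps interfaces whose state is up. The shape of that walk, using the two e810 functions (0x8086:0x159b) found in this run:

  for pci in 0000:86:00.0 0000:86:00.1; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
      echo "Found net devices under $pci: ${path##*/}"   # cvl_0_0, cvl_0_1
    done
  done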
00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:26:58.366 00:26:58.366 --- 10.0.0.2 ping statistics --- 00:26:58.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.366 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
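nvmf_tcp_init, traced above, builds the back-to-back topology that the host tests assume: the target port cvl_0_0 moves into its own network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and an iptables rule admits the NVMe/TCP listener port. Condensed from the commands in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two one-packet pings verify reachability in both directions before any NVMe traffic is attempted.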
00:26:58.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:26:58.366 00:26:58.366 --- 10.0.0.1 ping statistics --- 00:26:58.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.366 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=620052 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 620052 00:26:58.366 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 620052 ']' 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 [2024-11-20 06:37:29.407549] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
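nvmfappstart -m 0xE above hands the target a three-core mask: 0xE is binary 1110, so DPDK reports "Total cores available: 3" and reactors come up on cores 1, 2 and 3, as the lines that follow show. A quick check of the mask arithmetic:

  for c in {0..3}; do (( (0xE >> c) & 1 )) && printf ' core %d' "$c"; done; echo   # -> core 1 core 2 core 3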
00:26:58.367 [2024-11-20 06:37:29.407592] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.367 [2024-11-20 06:37:29.484187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.367 [2024-11-20 06:37:29.525942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.367 [2024-11-20 06:37:29.525976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.367 [2024-11-20 06:37:29.525982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.367 [2024-11-20 06:37:29.525988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.367 [2024-11-20 06:37:29.525993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.367 [2024-11-20 06:37:29.527389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.367 [2024-11-20 06:37:29.527494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.367 [2024-11-20 06:37:29.527495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 [2024-11-20 06:37:29.662391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 Malloc0 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 [2024-11-20 06:37:29.726194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 [2024-11-20 06:37:29.734135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 Malloc1 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=620079 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 620079 /var/tmp/bdevperf.sock 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 620079 ']' 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:58.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
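Everything the scenario needs is now in place: a TCP transport, two subsystems (cnode1 and cnode2) each backed by a 64 MB malloc bdev, two listeners per subsystem on ports 4420 and 4421 of the namespaced target address 10.0.0.2, and a bdevperf process started with -z (wait for RPCs) on its own socket. As a minimal sketch, assuming a running nvmf_tgt and the stock scripts/rpc.py from the SPDK tree on its default socket, the target half of that setup condenses to:

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # flags exactly as the trace uses them (-u 8192 sets the I/O unit size)
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The attach/detach exchanges against /var/tmp/bdevperf.sock that follow are the multicontroller test proper.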
00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:58.367 06:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:58.367 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:26:58.367 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:58.367 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.367 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.627 NVMe0n1 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.627 1 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.627 request: 00:26:58.627 { 00:26:58.627 "name": "NVMe0", 00:26:58.627 "trtype": "tcp", 00:26:58.627 "traddr": "10.0.0.2", 00:26:58.627 "adrfam": "ipv4", 00:26:58.627 "trsvcid": "4420", 00:26:58.627 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:26:58.627 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:58.627 "hostaddr": "10.0.0.1", 00:26:58.627 "prchk_reftag": false, 00:26:58.627 "prchk_guard": false, 00:26:58.627 "hdgst": false, 00:26:58.627 "ddgst": false, 00:26:58.627 "allow_unrecognized_csi": false, 00:26:58.627 "method": "bdev_nvme_attach_controller", 00:26:58.627 "req_id": 1 00:26:58.627 } 00:26:58.627 Got JSON-RPC error response 00:26:58.627 response: 00:26:58.627 { 00:26:58.627 "code": -114, 00:26:58.627 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:58.627 } 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.627 request: 00:26:58.627 { 00:26:58.627 "name": "NVMe0", 00:26:58.627 "trtype": "tcp", 00:26:58.627 "traddr": "10.0.0.2", 00:26:58.627 "adrfam": "ipv4", 00:26:58.627 "trsvcid": "4420", 00:26:58.627 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:58.627 "hostaddr": "10.0.0.1", 00:26:58.627 "prchk_reftag": false, 00:26:58.627 "prchk_guard": false, 00:26:58.627 "hdgst": false, 00:26:58.627 "ddgst": false, 00:26:58.627 "allow_unrecognized_csi": false, 00:26:58.627 "method": "bdev_nvme_attach_controller", 00:26:58.627 "req_id": 1 00:26:58.627 } 00:26:58.627 Got JSON-RPC error response 00:26:58.627 response: 00:26:58.627 { 00:26:58.627 "code": -114, 00:26:58.627 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:58.627 } 00:26:58.627 06:37:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.627 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.627 request: 00:26:58.627 { 00:26:58.627 "name": "NVMe0", 00:26:58.627 "trtype": "tcp", 00:26:58.627 "traddr": "10.0.0.2", 00:26:58.627 "adrfam": "ipv4", 00:26:58.627 "trsvcid": "4420", 00:26:58.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:58.627 "hostaddr": "10.0.0.1", 00:26:58.627 "prchk_reftag": false, 00:26:58.627 "prchk_guard": false, 00:26:58.628 "hdgst": false, 00:26:58.628 "ddgst": false, 00:26:58.628 "multipath": "disable", 00:26:58.628 "allow_unrecognized_csi": false, 00:26:58.628 "method": "bdev_nvme_attach_controller", 00:26:58.628 "req_id": 1 00:26:58.628 } 00:26:58.628 Got JSON-RPC error response 00:26:58.628 response: 00:26:58.628 { 00:26:58.628 "code": -114, 00:26:58.628 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:26:58.628 } 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.628 06:37:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.628 request: 00:26:58.628 { 00:26:58.628 "name": "NVMe0", 00:26:58.628 "trtype": "tcp", 00:26:58.628 "traddr": "10.0.0.2", 00:26:58.628 "adrfam": "ipv4", 00:26:58.628 "trsvcid": "4420", 00:26:58.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:58.628 "hostaddr": "10.0.0.1", 00:26:58.628 "prchk_reftag": false, 00:26:58.628 "prchk_guard": false, 00:26:58.628 "hdgst": false, 00:26:58.628 "ddgst": false, 00:26:58.628 "multipath": "failover", 00:26:58.628 "allow_unrecognized_csi": false, 00:26:58.628 "method": "bdev_nvme_attach_controller", 00:26:58.628 "req_id": 1 00:26:58.628 } 00:26:58.628 Got JSON-RPC error response 00:26:58.628 response: 00:26:58.628 { 00:26:58.628 "code": -114, 00:26:58.628 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:58.628 } 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.628 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.887 NVMe0n1 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
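The pattern in the four rejected calls above is the point of the test: bdev_nvme_attach_controller refuses to reuse the controller name NVMe0 for the network path it already holds (10.0.0.2:4420), whether the retry changes the host NQN, targets the other subsystem, or adds -x disable / -x failover, and each rejection comes back as JSON-RPC error -114. The call that finally succeeds keeps the name but points at the second listener, so the existing NVMe0n1 bdev gains a second path instead of a duplicate controller. Reduced to the two calls that matter, again assuming the bdevperf RPC socket from the trace:

    # first path: creates controller NVMe0 and bdev NVMe0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # same name, same subsystem, different trsvcid: accepted as a second path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1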
00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.887 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.887 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.146 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.146 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:59.146 06:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:00.082 { 00:27:00.082 "results": [ 00:27:00.082 { 00:27:00.082 "job": "NVMe0n1", 00:27:00.082 "core_mask": "0x1", 00:27:00.082 "workload": "write", 00:27:00.082 "status": "finished", 00:27:00.082 "queue_depth": 128, 00:27:00.082 "io_size": 4096, 00:27:00.082 "runtime": 1.006514, 00:27:00.082 "iops": 23556.552616257697, 00:27:00.082 "mibps": 92.01778365725663, 00:27:00.082 "io_failed": 0, 00:27:00.082 "io_timeout": 0, 00:27:00.082 "avg_latency_us": 5416.962179992368, 00:27:00.082 "min_latency_us": 4181.820952380953, 00:27:00.082 "max_latency_us": 14355.504761904762 00:27:00.082 } 00:27:00.082 ], 00:27:00.082 "core_count": 1 00:27:00.082 } 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 620079 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 620079 ']' 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 620079 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:00.082 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 620079 00:27:00.342 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:00.342 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:00.342 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 620079' 00:27:00.342 killing process with pid 620079 00:27:00.342 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 620079 00:27:00.342 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 620079 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:27:00.342 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:27:00.342 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:00.342 [2024-11-20 06:37:29.842494] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:27:00.342 [2024-11-20 06:37:29.842548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620079 ] 00:27:00.342 [2024-11-20 06:37:29.917970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.342 [2024-11-20 06:37:29.960122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.342 [2024-11-20 06:37:30.701112] bdev.c:4688:bdev_name_add: *ERROR*: Bdev name b7cf8555-528a-4467-94fe-8916eac4b8fc already exists 00:27:00.342 [2024-11-20 06:37:30.701139] bdev.c:7833:bdev_register: *ERROR*: Unable to add uuid:b7cf8555-528a-4467-94fe-8916eac4b8fc alias for bdev NVMe1n1 00:27:00.342 [2024-11-20 06:37:30.701147] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:00.342 Running I/O for 1 seconds... 00:27:00.342 23550.00 IOPS, 91.99 MiB/s 00:27:00.342 Latency(us) 00:27:00.342 [2024-11-20T05:37:32.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.342 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:00.342 NVMe0n1 : 1.01 23556.55 92.02 0.00 0.00 5416.96 4181.82 14355.50 00:27:00.342 [2024-11-20T05:37:32.178Z] =================================================================================================================== 00:27:00.342 [2024-11-20T05:37:32.178Z] Total : 23556.55 92.02 0.00 0.00 5416.96 4181.82 14355.50 00:27:00.342 Received shutdown signal, test time was about 1.000000 seconds 00:27:00.342 00:27:00.342 Latency(us) 00:27:00.342 [2024-11-20T05:37:32.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.342 [2024-11-20T05:37:32.178Z] =================================================================================================================== 00:27:00.342 [2024-11-20T05:37:32.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.343 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.343 rmmod nvme_tcp 00:27:00.343 rmmod nvme_fabrics 00:27:00.343 rmmod nvme_keyring 00:27:00.343 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:27:00.602 
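Two things in the try.txt dump above are worth a note. First, the bdev.c ERROR lines are apparently the expected outcome here: NVMe1 attaches to the same namespace that NVMe0n1 already registered, so a second bdev with the same UUID (b7cf8555-...) is refused while the controller itself still attaches, which is consistent with bdev_nvme_get_controllers counting 2. Second, the reported numbers are self-consistent: one core ran 128-deep 4 KiB writes for about a second, and a quick sanity check on the JSON gives

    23556.55 IOPS x 4096 B    ≈ 96.49 MB/s
    96,487,629 B/s / 1,048,576 ≈ 92.02 MiB/s    (matches "mibps": 92.0178)
    128 / 23556.55 IOPS        ≈ 5.43 ms         (Little's law; matches avg_latency_us ≈ 5417)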
06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 620052 ']' 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 620052 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 620052 ']' 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 620052 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 620052 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 620052' 00:27:00.602 killing process with pid 620052 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 620052 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 620052 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:27:00.602 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:00.861 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:27:00.861 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:00.861 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:00.861 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.861 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.861 06:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.768 06:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:02.768 00:27:02.768 real 0m11.328s 00:27:02.768 user 0m12.697s 00:27:02.768 sys 0m5.323s 00:27:02.768 06:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:02.768 06:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:02.768 ************************************ 00:27:02.768 END TEST nvmf_multicontroller 00:27:02.768 ************************************ 00:27:02.768 06:37:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:27:02.768 06:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:02.768 06:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:02.768 06:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.768 ************************************ 00:27:02.768 START TEST nvmf_aer 00:27:02.768 ************************************ 00:27:02.768 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:03.028 * Looking for test storage... 00:27:03.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:03.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.028 --rc genhtml_branch_coverage=1 00:27:03.028 --rc genhtml_function_coverage=1 00:27:03.028 --rc genhtml_legend=1 00:27:03.028 --rc geninfo_all_blocks=1 00:27:03.028 --rc geninfo_unexecuted_blocks=1 00:27:03.028 00:27:03.028 ' 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:03.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.028 --rc genhtml_branch_coverage=1 00:27:03.028 --rc genhtml_function_coverage=1 00:27:03.028 --rc genhtml_legend=1 00:27:03.028 --rc geninfo_all_blocks=1 00:27:03.028 --rc geninfo_unexecuted_blocks=1 00:27:03.028 00:27:03.028 ' 00:27:03.028 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:03.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.029 --rc genhtml_branch_coverage=1 00:27:03.029 --rc genhtml_function_coverage=1 00:27:03.029 --rc genhtml_legend=1 00:27:03.029 --rc geninfo_all_blocks=1 00:27:03.029 --rc geninfo_unexecuted_blocks=1 00:27:03.029 00:27:03.029 ' 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:03.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.029 --rc genhtml_branch_coverage=1 00:27:03.029 --rc genhtml_function_coverage=1 00:27:03.029 --rc genhtml_legend=1 00:27:03.029 --rc geninfo_all_blocks=1 00:27:03.029 --rc geninfo_unexecuted_blocks=1 00:27:03.029 00:27:03.029 ' 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.029 06:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.602 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:09.603 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:09.603 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:09.603 Found net devices under 0000:86:00.0: cvl_0_0 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.603 06:37:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:09.603 Found net devices under 0000:86:00.1: cvl_0_1 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.603 
06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:27:09.603 00:27:09.603 --- 10.0.0.2 ping statistics --- 00:27:09.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.603 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:27:09.603 00:27:09.603 --- 10.0.0.1 ping statistics --- 00:27:09.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.603 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:09.603 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=624066 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 624066 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 624066 ']' 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:09.604 06:37:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 [2024-11-20 06:37:40.839722] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
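The nvmf_tcp_init sequence traced above wires the two E810 ports into a loopback topology: port 0000:86:00.0 (cvl_0_0) is moved into a private network namespace and becomes the target side, while 0000:86:00.1 (cvl_0_1) stays in the root namespace as the initiator side. A condensed, hand-runnable sketch of the same steps, using the interface names and addresses from this run (both are specific to this host):

  # target port goes into its own namespace; initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # accept NVMe/TCP traffic on the initiator-side port, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

(The harness additionally tags the iptables rule with an SPDK_NVMF comment so the teardown step can strip it again with iptables-save | grep -v SPDK_NVMF.)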
00:27:09.604 [2024-11-20 06:37:40.839769] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.604 [2024-11-20 06:37:40.920525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.604 [2024-11-20 06:37:40.963368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.604 [2024-11-20 06:37:40.963404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.604 [2024-11-20 06:37:40.963411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.604 [2024-11-20 06:37:40.963417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.604 [2024-11-20 06:37:40.963422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.604 [2024-11-20 06:37:40.964854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.604 [2024-11-20 06:37:40.964962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.604 [2024-11-20 06:37:40.965066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.604 [2024-11-20 06:37:40.965067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 [2024-11-20 06:37:41.102058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 Malloc0 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 [2024-11-20 06:37:41.167632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.604 [ 00:27:09.604 { 00:27:09.604 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:09.604 "subtype": "Discovery", 00:27:09.604 "listen_addresses": [], 00:27:09.604 "allow_any_host": true, 00:27:09.604 "hosts": [] 00:27:09.604 }, 00:27:09.604 { 00:27:09.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.604 "subtype": "NVMe", 00:27:09.604 "listen_addresses": [ 00:27:09.604 { 00:27:09.604 "trtype": "TCP", 00:27:09.604 "adrfam": "IPv4", 00:27:09.604 "traddr": "10.0.0.2", 00:27:09.604 "trsvcid": "4420" 00:27:09.604 } 00:27:09.604 ], 00:27:09.604 "allow_any_host": true, 00:27:09.604 "hosts": [], 00:27:09.604 "serial_number": "SPDK00000000000001", 00:27:09.604 "model_number": "SPDK bdev Controller", 00:27:09.604 "max_namespaces": 2, 00:27:09.604 "min_cntlid": 1, 00:27:09.604 "max_cntlid": 65519, 00:27:09.604 "namespaces": [ 00:27:09.604 { 00:27:09.604 "nsid": 1, 00:27:09.604 "bdev_name": "Malloc0", 00:27:09.604 "name": "Malloc0", 00:27:09.604 "nguid": "F183CDB8AD91423DA674A85AF86336C8", 00:27:09.604 "uuid": "f183cdb8-ad91-423d-a674-a85af86336c8" 00:27:09.604 } 00:27:09.604 ] 00:27:09.604 } 00:27:09.604 ] 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=624098 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:27:09.604 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:27:09.863 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:09.863 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:09.863 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:27:09.863 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:09.863 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.863 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.864 Malloc1 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.864 Asynchronous Event Request test 00:27:09.864 Attaching to 10.0.0.2 00:27:09.864 Attached to 10.0.0.2 00:27:09.864 Registering asynchronous event callbacks... 00:27:09.864 Starting namespace attribute notice tests for all controllers... 00:27:09.864 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:09.864 aer_cb - Changed Namespace 00:27:09.864 Cleaning up... 
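The aer test body that produced the output above boils down to a short RPC sequence. Roughly what host/aer.sh drives through its rpc_cmd helper, sketched here with scripts/rpc.py against the target's default /var/tmp/spdk.sock (paths relative to the spdk checkout):

  # one-namespace subsystem, capped at two namespaces (-m 2)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # start the AER listener; it touches the file once its callbacks are registered
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # once the touch file exists (the waitforfile loop above), hot-add a second
  # namespace; this is what triggers the Changed Namespace notice in the output
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2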
00:27:09.864 [ 00:27:09.864 { 00:27:09.864 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:09.864 "subtype": "Discovery", 00:27:09.864 "listen_addresses": [], 00:27:09.864 "allow_any_host": true, 00:27:09.864 "hosts": [] 00:27:09.864 }, 00:27:09.864 { 00:27:09.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.864 "subtype": "NVMe", 00:27:09.864 "listen_addresses": [ 00:27:09.864 { 00:27:09.864 "trtype": "TCP", 00:27:09.864 "adrfam": "IPv4", 00:27:09.864 "traddr": "10.0.0.2", 00:27:09.864 "trsvcid": "4420" 00:27:09.864 } 00:27:09.864 ], 00:27:09.864 "allow_any_host": true, 00:27:09.864 "hosts": [], 00:27:09.864 "serial_number": "SPDK00000000000001", 00:27:09.864 "model_number": "SPDK bdev Controller", 00:27:09.864 "max_namespaces": 2, 00:27:09.864 "min_cntlid": 1, 00:27:09.864 "max_cntlid": 65519, 00:27:09.864 "namespaces": [ 00:27:09.864 { 00:27:09.864 "nsid": 1, 00:27:09.864 "bdev_name": "Malloc0", 00:27:09.864 "name": "Malloc0", 00:27:09.864 "nguid": "F183CDB8AD91423DA674A85AF86336C8", 00:27:09.864 "uuid": "f183cdb8-ad91-423d-a674-a85af86336c8" 00:27:09.864 }, 00:27:09.864 { 00:27:09.864 "nsid": 2, 00:27:09.864 "bdev_name": "Malloc1", 00:27:09.864 "name": "Malloc1", 00:27:09.864 "nguid": "4BBFDD2DEFAF459CBCA31B2271A84E0F", 00:27:09.864 "uuid": "4bbfdd2d-efaf-459c-bca3-1b2271a84e0f" 00:27:09.864 } 00:27:09.864 ] 00:27:09.864 } 00:27:09.864 ] 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 624098 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.864 rmmod 
nvme_tcp 00:27:09.864 rmmod nvme_fabrics 00:27:09.864 rmmod nvme_keyring 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 624066 ']' 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 624066 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 624066 ']' 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 624066 00:27:09.864 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 624066 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 624066' 00:27:10.124 killing process with pid 624066 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 624066 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 624066 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.124 06:37:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.661 06:37:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:12.661 00:27:12.661 real 0m9.404s 00:27:12.661 user 0m5.538s 00:27:12.661 sys 0m4.929s 00:27:12.661 06:37:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:12.661 06:37:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:12.661 ************************************ 00:27:12.661 END TEST nvmf_aer 00:27:12.661 ************************************ 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.661 ************************************ 00:27:12.661 START TEST nvmf_async_init 00:27:12.661 ************************************ 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:12.661 * Looking for test storage... 00:27:12.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:12.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.661 --rc genhtml_branch_coverage=1 00:27:12.661 --rc genhtml_function_coverage=1 00:27:12.661 --rc genhtml_legend=1 00:27:12.661 --rc geninfo_all_blocks=1 00:27:12.661 --rc geninfo_unexecuted_blocks=1 00:27:12.661 00:27:12.661 ' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:12.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.661 --rc genhtml_branch_coverage=1 00:27:12.661 --rc genhtml_function_coverage=1 00:27:12.661 --rc genhtml_legend=1 00:27:12.661 --rc geninfo_all_blocks=1 00:27:12.661 --rc geninfo_unexecuted_blocks=1 00:27:12.661 00:27:12.661 ' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:12.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.661 --rc genhtml_branch_coverage=1 00:27:12.661 --rc genhtml_function_coverage=1 00:27:12.661 --rc genhtml_legend=1 00:27:12.661 --rc geninfo_all_blocks=1 00:27:12.661 --rc geninfo_unexecuted_blocks=1 00:27:12.661 00:27:12.661 ' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:12.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.661 --rc genhtml_branch_coverage=1 00:27:12.661 --rc genhtml_function_coverage=1 00:27:12.661 --rc genhtml_legend=1 00:27:12.661 --rc geninfo_all_blocks=1 00:27:12.661 --rc geninfo_unexecuted_blocks=1 00:27:12.661 00:27:12.661 ' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.661 06:37:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.661 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:12.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:12.662 06:37:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1f8e052212c74b4c962abd9dac3bfc37 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:12.662 06:37:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:19.237 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:19.237 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:19.237 Found net devices under 0000:86:00.0: cvl_0_0 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:19.237 Found net devices under 0000:86:00.1: cvl_0_1 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.237 06:37:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.237 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.238 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.238 06:37:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:27:19.238 00:27:19.238 --- 10.0.0.2 ping statistics --- 00:27:19.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.238 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:19.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:27:19.238 00:27:19.238 --- 10.0.0.1 ping statistics --- 00:27:19.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.238 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=627628 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 627628 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 627628 ']' 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:19.238 [2024-11-20 06:37:50.273899] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
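As in the aer run, the target application is launched inside the namespace; only the core mask differs (-m 0x1 here versus -m 0xF above, since async_init needs a single reactor). A minimal stand-in for the nvmfappstart/waitforlisten pair traced above; the real waitforlisten also probes the RPC socket with retries and a timeout, so this polling loop is only a sketch:

  # launch nvmf_tgt in the target namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # crude readiness check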
00:27:19.238 [2024-11-20 06:37:50.273946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:19.238 [2024-11-20 06:37:50.353004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:19.238 [2024-11-20 06:37:50.393191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:19.238 [2024-11-20 06:37:50.393233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:19.238 [2024-11-20 06:37:50.393240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:19.238 [2024-11-20 06:37:50.393246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:19.238 [2024-11-20 06:37:50.393252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:19.238 [2024-11-20 06:37:50.393804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 [2024-11-20 06:37:50.538037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 null0
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1f8e052212c74b4c962abd9dac3bfc37
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 [2024-11-20 06:37:50.582314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 nvme0n1
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.238 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.238 [
00:27:19.238 {
00:27:19.238 "name": "nvme0n1",
00:27:19.238 "aliases": [
00:27:19.238 "1f8e0522-12c7-4b4c-962a-bd9dac3bfc37"
00:27:19.238 ],
00:27:19.238 "product_name": "NVMe disk",
00:27:19.238 "block_size": 512,
00:27:19.238 "num_blocks": 2097152,
00:27:19.238 "uuid": "1f8e0522-12c7-4b4c-962a-bd9dac3bfc37",
00:27:19.238 "numa_id": 1,
00:27:19.238 "assigned_rate_limits": {
00:27:19.238 "rw_ios_per_sec": 0,
00:27:19.238 "rw_mbytes_per_sec": 0,
00:27:19.238 "r_mbytes_per_sec": 0,
00:27:19.238 "w_mbytes_per_sec": 0
00:27:19.238 },
00:27:19.238 "claimed": false,
00:27:19.238 "zoned": false,
00:27:19.238 "supported_io_types": {
00:27:19.238 "read": true,
00:27:19.238 "write": true,
00:27:19.238 "unmap": false,
00:27:19.238 "flush": true,
00:27:19.238 "reset": true,
00:27:19.238 "nvme_admin": true,
00:27:19.238 "nvme_io": true,
00:27:19.238 "nvme_io_md": false,
00:27:19.238 "write_zeroes": true,
00:27:19.238 "zcopy": false,
00:27:19.238 "get_zone_info": false,
00:27:19.238 "zone_management": false,
00:27:19.238 "zone_append": false,
00:27:19.238 "compare": true,
00:27:19.238 "compare_and_write": true,
00:27:19.238 "abort": true,
00:27:19.238 "seek_hole": false,
00:27:19.238 "seek_data": false,
00:27:19.238 "copy": true,
00:27:19.238 "nvme_iov_md": false
00:27:19.238 },
00:27:19.238 "memory_domains": [
00:27:19.238 {
00:27:19.238 "dma_device_id": "system",
00:27:19.238 "dma_device_type": 1
00:27:19.239 }
00:27:19.239 ],
00:27:19.239 "driver_specific": {
00:27:19.239 "nvme": [
00:27:19.239 {
00:27:19.239 "trid": {
00:27:19.239 "trtype": "TCP",
00:27:19.239 "adrfam": "IPv4",
00:27:19.239 "traddr": "10.0.0.2",
00:27:19.239 "trsvcid": "4420",
00:27:19.239 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:27:19.239 },
00:27:19.239 "ctrlr_data": {
00:27:19.239 "cntlid": 1,
00:27:19.239 "vendor_id": "0x8086",
00:27:19.239 "model_number": "SPDK bdev Controller",
00:27:19.239 "serial_number": "00000000000000000000",
00:27:19.239 "firmware_revision": "25.01",
00:27:19.239 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:19.239 "oacs": {
00:27:19.239 "security": 0,
00:27:19.239 "format": 0,
00:27:19.239 "firmware": 0,
00:27:19.239 "ns_manage": 0
00:27:19.239 },
00:27:19.239 "multi_ctrlr": true,
00:27:19.239 "ana_reporting": false
00:27:19.239 },
00:27:19.239 "vs": {
00:27:19.239 "nvme_version": "1.3"
00:27:19.239 },
00:27:19.239 "ns_data": {
00:27:19.239 "id": 1,
00:27:19.239 "can_share": true
00:27:19.239 }
00:27:19.239 }
00:27:19.239 ],
00:27:19.239 "mp_policy": "active_passive"
00:27:19.239 }
00:27:19.239 }
00:27:19.239 ]
00:27:19.239 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.239 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:27:19.239 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.239 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.239 [2024-11-20 06:37:50.843903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:19.239 [2024-11-20 06:37:50.843963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ec8e0 (9): Bad file descriptor
00:27:19.239 [2024-11-20 06:37:50.976288] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:27:19.239 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.239 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:27:19.239 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.239 06:37:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.239 [
00:27:19.239 {
00:27:19.239 "name": "nvme0n1",
00:27:19.239 "aliases": [
00:27:19.239 "1f8e0522-12c7-4b4c-962a-bd9dac3bfc37"
00:27:19.239 ],
00:27:19.239 "product_name": "NVMe disk",
00:27:19.239 "block_size": 512,
00:27:19.239 "num_blocks": 2097152,
00:27:19.239 "uuid": "1f8e0522-12c7-4b4c-962a-bd9dac3bfc37",
00:27:19.239 "numa_id": 1,
00:27:19.239 "assigned_rate_limits": {
00:27:19.239 "rw_ios_per_sec": 0,
00:27:19.239 "rw_mbytes_per_sec": 0,
00:27:19.239 "r_mbytes_per_sec": 0,
00:27:19.239 "w_mbytes_per_sec": 0
00:27:19.239 },
00:27:19.239 "claimed": false,
00:27:19.239 "zoned": false,
00:27:19.239 "supported_io_types": {
00:27:19.239 "read": true,
00:27:19.239 "write": true,
00:27:19.239 "unmap": false,
00:27:19.239 "flush": true,
00:27:19.239 "reset": true,
00:27:19.239 "nvme_admin": true,
00:27:19.239 "nvme_io": true,
00:27:19.239 "nvme_io_md": false,
00:27:19.239 "write_zeroes": true,
00:27:19.239 "zcopy": false,
00:27:19.239 "get_zone_info": false,
00:27:19.239 "zone_management": false,
00:27:19.239 "zone_append": false,
00:27:19.239 "compare": true,
00:27:19.239 "compare_and_write": true,
00:27:19.239 "abort": true,
00:27:19.239 "seek_hole": false,
00:27:19.239 "seek_data": false,
00:27:19.239 "copy": true,
00:27:19.239 "nvme_iov_md": false
00:27:19.239 },
00:27:19.239 "memory_domains": [
00:27:19.239 {
00:27:19.239 "dma_device_id": "system",
00:27:19.239 "dma_device_type": 1
00:27:19.239 }
00:27:19.239 ],
00:27:19.239 "driver_specific": {
00:27:19.239 "nvme": [
00:27:19.239 {
00:27:19.239 "trid": {
00:27:19.239 "trtype": "TCP",
00:27:19.239 "adrfam": "IPv4",
00:27:19.239 "traddr": "10.0.0.2",
00:27:19.239 "trsvcid": "4420",
00:27:19.239 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:27:19.239 },
00:27:19.239 "ctrlr_data": {
00:27:19.239 "cntlid": 2,
00:27:19.239 "vendor_id": "0x8086",
00:27:19.239 "model_number": "SPDK bdev Controller",
00:27:19.239 "serial_number": "00000000000000000000",
00:27:19.239 "firmware_revision": "25.01",
00:27:19.239 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:19.239 "oacs": {
00:27:19.239 "security": 0,
00:27:19.239 "format": 0,
00:27:19.239 "firmware": 0,
00:27:19.239 "ns_manage": 0
00:27:19.239 },
00:27:19.239 "multi_ctrlr": true,
00:27:19.239 "ana_reporting": false
00:27:19.239 },
00:27:19.239 "vs": {
00:27:19.239 "nvme_version": "1.3"
00:27:19.239 },
00:27:19.239 "ns_data": {
00:27:19.239 "id": 1,
00:27:19.239 "can_share": true
00:27:19.239 }
00:27:19.239 }
00:27:19.239 ],
00:27:19.239 "mp_policy": "active_passive"
00:27:19.239 }
00:27:19.239 }
00:27:19.239 ]
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
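Everything up to this point is the core async_init flow: host/async_init.sh provisions the target over RPC, attaches a host-side NVMe-oF controller to it, and inspects the resulting bdev before and after a reset. A minimal standalone sketch of that sequence, with every command taken from the xtrace above (assumptions: a running nvmf_tgt and the default RPC socket):

# Target side: TCP transport, 1024 MiB null bdev with 512 B blocks (hence the
# 2097152 num_blocks reported above), subsystem, namespace, listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1f8e052212c74b4c962abd9dac3bfc37
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host side: attach and verify; the -g GUID resurfaces as the bdev uuid
# 1f8e0522-12c7-4b4c-962a-bd9dac3bfc37 in the bdev_get_bdevs dumps.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1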
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FTH3jjnHcy
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FTH3jjnHcy
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.FTH3jjnHcy
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.239 [2024-11-20 06:37:51.052537] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:27:19.239 [2024-11-20 06:37:51.052652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.239 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.499 [2024-11-20 06:37:51.068591] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:27:19.499 nvme0n1
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.499 [
00:27:19.499 {
00:27:19.499 "name": "nvme0n1",
00:27:19.499 "aliases": [
00:27:19.499 "1f8e0522-12c7-4b4c-962a-bd9dac3bfc37"
00:27:19.499 ],
00:27:19.499 "product_name": "NVMe disk",
00:27:19.499 "block_size": 512,
00:27:19.499 "num_blocks": 2097152,
00:27:19.499 "uuid": "1f8e0522-12c7-4b4c-962a-bd9dac3bfc37",
00:27:19.499 "numa_id": 1,
00:27:19.499 "assigned_rate_limits": {
00:27:19.499 "rw_ios_per_sec": 0,
00:27:19.499 "rw_mbytes_per_sec": 0,
00:27:19.499 "r_mbytes_per_sec": 0,
00:27:19.499 "w_mbytes_per_sec": 0
00:27:19.499 },
00:27:19.499 "claimed": false,
00:27:19.499 "zoned": false,
00:27:19.499 "supported_io_types": {
00:27:19.499 "read": true,
00:27:19.499 "write": true,
00:27:19.499 "unmap": false,
00:27:19.499 "flush": true,
00:27:19.499 "reset": true,
00:27:19.499 "nvme_admin": true,
00:27:19.499 "nvme_io": true,
00:27:19.499 "nvme_io_md": false,
00:27:19.499 "write_zeroes": true,
00:27:19.499 "zcopy": false,
00:27:19.499 "get_zone_info": false,
00:27:19.499 "zone_management": false,
00:27:19.499 "zone_append": false,
00:27:19.499 "compare": true,
00:27:19.499 "compare_and_write": true,
00:27:19.499 "abort": true,
00:27:19.499 "seek_hole": false,
00:27:19.499 "seek_data": false,
00:27:19.499 "copy": true,
00:27:19.499 "nvme_iov_md": false
00:27:19.499 },
00:27:19.499 "memory_domains": [
00:27:19.499 {
00:27:19.499 "dma_device_id": "system",
00:27:19.499 "dma_device_type": 1
00:27:19.499 }
00:27:19.499 ],
00:27:19.499 "driver_specific": {
00:27:19.499 "nvme": [
00:27:19.499 {
00:27:19.499 "trid": {
00:27:19.499 "trtype": "TCP",
00:27:19.499 "adrfam": "IPv4",
00:27:19.499 "traddr": "10.0.0.2",
00:27:19.499 "trsvcid": "4421",
00:27:19.499 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:27:19.499 },
00:27:19.499 "ctrlr_data": {
00:27:19.499 "cntlid": 3,
00:27:19.499 "vendor_id": "0x8086",
00:27:19.499 "model_number": "SPDK bdev Controller",
00:27:19.499 "serial_number": "00000000000000000000",
00:27:19.499 "firmware_revision": "25.01",
00:27:19.499 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:19.499 "oacs": {
00:27:19.499 "security": 0,
00:27:19.499 "format": 0,
00:27:19.499 "firmware": 0,
00:27:19.499 "ns_manage": 0
00:27:19.499 },
00:27:19.499 "multi_ctrlr": true,
00:27:19.499 "ana_reporting": false
00:27:19.499 },
00:27:19.499 "vs": {
00:27:19.499 "nvme_version": "1.3"
00:27:19.499 },
00:27:19.499 "ns_data": {
00:27:19.499 "id": 1,
00:27:19.499 "can_share": true
00:27:19.499 }
00:27:19.499 }
00:27:19.499 ],
00:27:19.499 "mp_policy": "active_passive"
00:27:19.499 }
00:27:19.499 }
00:27:19.499 ]
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.FTH3jjnHcy
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT
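The TLS leg above is the same attach repeated over a PSK-secured listener on port 4421: the key is written in NVMe TLS interchange format to a 0600 file, registered with the keyring, and then referenced by name on both the subsystem-host binding and the controller attach. Condensed below (the key material is the published test key from the trace, not a secret; the rpc.py socket assumption is as before):

KEY_PATH=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"
./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both tcp.c and bdev_nvme_rpc.c flag TLS as experimental in the notices above, yet the attach negotiates successfully and the controller comes up as cntlid 3.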
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:19.499 rmmod nvme_tcp
00:27:19.499 rmmod nvme_fabrics
00:27:19.499 rmmod nvme_keyring
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 627628 ']'
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 627628
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 627628 ']'
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 627628
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 627628
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 627628'
00:27:19.499 killing process with pid 627628
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 627628
00:27:19.499 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 627628
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:19.758 06:37:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:21.663 06:37:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:21.922 
00:27:21.922 real 0m9.441s
00:27:21.922 user 0m3.011s
00:27:21.922 sys 0m4.845s
00:27:21.922 06:37:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:21.922 06:37:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:21.922 ************************************
00:27:21.922 END TEST nvmf_async_init
00:27:21.922 ************************************
00:27:21.922 06:37:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:27:21.922 06:37:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:27:21.922 06:37:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:27:21.922 06:37:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.922 ************************************
00:27:21.922 START TEST dma
00:27:21.922 ************************************
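As the xtrace below shows, the dma host test is effectively a no-op on this run: host/dma.sh line 12 compares the transport against rdma and line 13 exits 0 immediately, so nothing beyond sourcing nvmf/common.sh is exercised over TCP. The guard amounts to the sketch below (the variable name is illustrative; the trace only records the already-expanded comparison '[' tcp '!=' rdma ']'):

# Paraphrase of host/dma.sh lines 12-13 as seen in the trace that follows.
if [ "$TEST_TRANSPORT" != rdma ]; then
    exit 0
fi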
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:27:21.923 * Looking for test storage...
00:27:21.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-:
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-:
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<'
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:27:21.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:21.923 --rc genhtml_branch_coverage=1
00:27:21.923 --rc genhtml_function_coverage=1
00:27:21.923 --rc genhtml_legend=1
00:27:21.923 --rc geninfo_all_blocks=1
00:27:21.923 --rc geninfo_unexecuted_blocks=1
00:27:21.923 
00:27:21.923 '
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:27:21.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:21.923 --rc genhtml_branch_coverage=1
00:27:21.923 --rc genhtml_function_coverage=1
00:27:21.923 --rc genhtml_legend=1
00:27:21.923 --rc geninfo_all_blocks=1
00:27:21.923 --rc geninfo_unexecuted_blocks=1
00:27:21.923 
00:27:21.923 '
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:27:21.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:21.923 --rc genhtml_branch_coverage=1
00:27:21.923 --rc genhtml_function_coverage=1
00:27:21.923 --rc genhtml_legend=1
00:27:21.923 --rc geninfo_all_blocks=1
00:27:21.923 --rc geninfo_unexecuted_blocks=1
00:27:21.923 
00:27:21.923 '
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:27:21.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:21.923 --rc genhtml_branch_coverage=1
00:27:21.923 --rc genhtml_function_coverage=1
00:27:21.923 --rc genhtml_legend=1
00:27:21.923 --rc geninfo_all_blocks=1
00:27:21.923 --rc geninfo_unexecuted_blocks=1
00:27:21.923 
00:27:21.923 '
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:27:21.923 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:22.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']'
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0
00:27:22.183 
00:27:22.183 real 0m0.203s
00:27:22.183 user 0m0.123s
00:27:22.183 sys 0m0.095s
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:27:22.183 ************************************
00:27:22.183 END TEST dma
00:27:22.183 ************************************
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.183 ************************************
00:27:22.183 START TEST nvmf_identify
00:27:22.183 ************************************
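Each run_test re-enters common/autotest_common.sh, so the lcov version gate that opens the trace below is the same one already replayed for dma: lcov --version is parsed with awk, and lt 1.15 2 hands off to cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field. A condensed, self-contained reimplementation for illustration (an assumption-labelled sketch, not the literal scripts/common.sh body):

# Returns 0 (true) when version $1 sorts strictly before version $2.
lt() {
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater at this field -> not less
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # e.g. 1.15 vs 2 decides at field 0
    done
    return 1   # equal -> not less
}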
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp
00:27:22.183 * Looking for test storage...
00:27:22.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-:
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-:
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<'
00:27:22.183 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1
00:27:22.184 06:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:27:22.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:22.184 --rc genhtml_branch_coverage=1
00:27:22.184 --rc genhtml_function_coverage=1
00:27:22.184 --rc genhtml_legend=1
00:27:22.184 --rc geninfo_all_blocks=1
00:27:22.184 --rc geninfo_unexecuted_blocks=1
00:27:22.184 
00:27:22.184 '
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:27:22.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:22.184 --rc genhtml_branch_coverage=1
00:27:22.184 --rc genhtml_function_coverage=1
00:27:22.184 --rc genhtml_legend=1
00:27:22.184 --rc geninfo_all_blocks=1
00:27:22.184 --rc geninfo_unexecuted_blocks=1
00:27:22.184 
00:27:22.184 '
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:27:22.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:22.184 --rc genhtml_branch_coverage=1
00:27:22.184 --rc genhtml_function_coverage=1
00:27:22.184 --rc genhtml_legend=1
00:27:22.184 --rc geninfo_all_blocks=1
00:27:22.184 --rc geninfo_unexecuted_blocks=1
00:27:22.184 
00:27:22.184 '
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:27:22.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:22.184 --rc genhtml_branch_coverage=1
00:27:22.184 --rc genhtml_function_coverage=1
00:27:22.184 --rc genhtml_legend=1
00:27:22.184 --rc geninfo_all_blocks=1
00:27:22.184 --rc geninfo_unexecuted_blocks=1
00:27:22.184 
00:27:22.184 '
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:22.184 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:22.443 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:22.443 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:22.443 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:22.443 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:22.443 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:22.443 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:22.443 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:22.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit
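With NET_TYPE=phy, nvmftestinit is what turns the two discovered e810 ports into a self-contained TCP test network in the trace below: prepare_net_devs scans the PCI bus for supported NICs (Intel e810/x722 plus several Mellanox device IDs), finds cvl_0_0 and cvl_0_1, and nvmf_tcp_init then isolates the target port in its own network namespace and proves connectivity in both directions. Condensed from the commands the trace records (interface, namespace, and address values are the ones this host reports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator NIC stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator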
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable
00:27:22.444 06:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=()
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=()
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=()
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=()
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=()
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:27:29.015 Found 0000:86:00.0 (0x8086 - 0x159b)
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:27:29.015 Found 0000:86:00.1 (0x8086 - 0x159b)
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:29.015 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:27:29.016 Found net devices under 0000:86:00.0: cvl_0_0
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:27:29.016 Found net devices under 0000:86:00.1: cvl_0_1
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:29.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:29.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms
00:27:29.016 
00:27:29.016 --- 10.0.0.2 ping statistics ---
00:27:29.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:29.016 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:29.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:29.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms
00:27:29.016 
00:27:29.016 --- 10.0.0.1 ping statistics ---
00:27:29.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:29.016 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=631443
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 631443
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 631443 ']'
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:29.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:29.016 06:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:29.016 [2024-11-20 06:38:00.026169] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:27:29.016 [2024-11-20 06:38:00.026224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.016 [2024-11-20 06:38:00.107444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.016 [2024-11-20 06:38:00.149546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.016 [2024-11-20 06:38:00.149587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.016 [2024-11-20 06:38:00.149595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.016 [2024-11-20 06:38:00.149602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.016 [2024-11-20 06:38:00.149607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.016 [2024-11-20 06:38:00.151175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.016 [2024-11-20 06:38:00.151287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.016 [2024-11-20 06:38:00.151320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.016 [2024-11-20 06:38:00.151321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.277 [2024-11-20 06:38:00.882549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.277 Malloc0 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.277 [2024-11-20 06:38:00.992594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.277 06:38:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.277 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.277 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:29.277 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.277 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.277 [ 00:27:29.277 { 00:27:29.278 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:29.278 "subtype": "Discovery", 00:27:29.278 "listen_addresses": [ 00:27:29.278 { 00:27:29.278 "trtype": "TCP", 00:27:29.278 "adrfam": "IPv4", 00:27:29.278 "traddr": "10.0.0.2", 00:27:29.278 "trsvcid": "4420" 00:27:29.278 } 00:27:29.278 ], 00:27:29.278 "allow_any_host": true, 00:27:29.278 "hosts": [] 00:27:29.278 }, 00:27:29.278 { 00:27:29.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.278 "subtype": "NVMe", 00:27:29.278 "listen_addresses": [ 00:27:29.278 { 00:27:29.278 "trtype": "TCP", 00:27:29.278 "adrfam": "IPv4", 00:27:29.278 "traddr": "10.0.0.2", 00:27:29.278 "trsvcid": "4420" 00:27:29.278 } 00:27:29.278 ], 00:27:29.278 "allow_any_host": true, 00:27:29.278 "hosts": [], 00:27:29.278 "serial_number": "SPDK00000000000001", 00:27:29.278 "model_number": "SPDK bdev Controller", 00:27:29.278 "max_namespaces": 32, 00:27:29.278 "min_cntlid": 1, 00:27:29.278 "max_cntlid": 65519, 00:27:29.278 "namespaces": [ 00:27:29.278 { 00:27:29.278 "nsid": 1, 00:27:29.278 "bdev_name": "Malloc0", 00:27:29.278 "name": "Malloc0", 00:27:29.278 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:29.278 "eui64": "ABCDEF0123456789", 00:27:29.278 "uuid": "380f3563-8988-4b17-a26f-6eaf1866993a" 00:27:29.278 } 00:27:29.278 ] 00:27:29.278 } 00:27:29.278 ] 00:27:29.278 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.278 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:29.278 [2024-11-20 06:38:01.045403] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:27:29.278 [2024-11-20 06:38:01.045437] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631693 ] 00:27:29.278 [2024-11-20 06:38:01.085731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:29.278 [2024-11-20 06:38:01.085777] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:29.278 [2024-11-20 06:38:01.085782] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:29.278 [2024-11-20 06:38:01.085795] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:29.278 [2024-11-20 06:38:01.085804] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:29.278 [2024-11-20 06:38:01.089506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:29.278 [2024-11-20 06:38:01.089536] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1bc8690 0 00:27:29.278 [2024-11-20 06:38:01.096267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:29.278 [2024-11-20 06:38:01.096282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:29.278 [2024-11-20 06:38:01.096287] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:29.278 [2024-11-20 06:38:01.096290] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:29.278 [2024-11-20 06:38:01.096322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.096327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.096331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.278 [2024-11-20 06:38:01.096344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:29.278 [2024-11-20 06:38:01.096359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.278 [2024-11-20 06:38:01.104212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.278 [2024-11-20 06:38:01.104220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.278 [2024-11-20 06:38:01.104224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.278 [2024-11-20 06:38:01.104240] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:29.278 [2024-11-20 06:38:01.104246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:29.278 [2024-11-20 06:38:01.104251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:29.278 [2024-11-20 06:38:01.104264] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.278 [2024-11-20 06:38:01.104277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.278 [2024-11-20 06:38:01.104292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.278 [2024-11-20 06:38:01.104468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.278 [2024-11-20 06:38:01.104474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.278 [2024-11-20 06:38:01.104477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.278 [2024-11-20 06:38:01.104485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:29.278 [2024-11-20 06:38:01.104491] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:29.278 [2024-11-20 06:38:01.104498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.278 [2024-11-20 06:38:01.104510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.278 [2024-11-20 06:38:01.104520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.278 [2024-11-20 06:38:01.104584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.278 [2024-11-20 06:38:01.104590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.278 [2024-11-20 06:38:01.104593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.278 [2024-11-20 06:38:01.104601] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:29.278 [2024-11-20 06:38:01.104608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:29.278 [2024-11-20 06:38:01.104614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.278 [2024-11-20 06:38:01.104626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.278 [2024-11-20 06:38:01.104635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 
00:27:29.278 [2024-11-20 06:38:01.104698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.278 [2024-11-20 06:38:01.104704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.278 [2024-11-20 06:38:01.104707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.278 [2024-11-20 06:38:01.104711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.278 [2024-11-20 06:38:01.104715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:29.279 [2024-11-20 06:38:01.104723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.104727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.104730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.279 [2024-11-20 06:38:01.104735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.279 [2024-11-20 06:38:01.104745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.279 [2024-11-20 06:38:01.104802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.279 [2024-11-20 06:38:01.104810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.279 [2024-11-20 06:38:01.104813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.104816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.279 [2024-11-20 06:38:01.104820] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:29.279 [2024-11-20 06:38:01.104825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:29.279 [2024-11-20 06:38:01.104832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:29.279 [2024-11-20 06:38:01.104939] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:29.279 [2024-11-20 06:38:01.104944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:29.279 [2024-11-20 06:38:01.104952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.104955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.104958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.279 [2024-11-20 06:38:01.104963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.279 [2024-11-20 06:38:01.104973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.279 [2024-11-20 06:38:01.105037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.279 [2024-11-20 06:38:01.105043] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.279 [2024-11-20 06:38:01.105046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.105049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.279 [2024-11-20 06:38:01.105053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:29.279 [2024-11-20 06:38:01.105060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.105064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.105067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.279 [2024-11-20 06:38:01.105072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.279 [2024-11-20 06:38:01.105081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.279 [2024-11-20 06:38:01.105147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.279 [2024-11-20 06:38:01.105152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.279 [2024-11-20 06:38:01.105155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.105158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.279 [2024-11-20 06:38:01.105162] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:29.279 [2024-11-20 06:38:01.105166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:29.279 [2024-11-20 06:38:01.105174] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:29.279 [2024-11-20 06:38:01.105181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:29.279 [2024-11-20 06:38:01.105192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.105196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.279 [2024-11-20 06:38:01.105207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.279 [2024-11-20 06:38:01.105218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.279 [2024-11-20 06:38:01.105312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.279 [2024-11-20 06:38:01.105318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.279 [2024-11-20 06:38:01.105321] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.105325] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc8690): datao=0, datal=4096, cccid=0 00:27:29.279 [2024-11-20 06:38:01.105329] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1c2a100) on tqpair(0x1bc8690): expected_datao=0, payload_size=4096 00:27:29.279 [2024-11-20 06:38:01.105333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.105346] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.279 [2024-11-20 06:38:01.105351] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.543 [2024-11-20 06:38:01.151219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.543 [2024-11-20 06:38:01.151223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.543 [2024-11-20 06:38:01.151235] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:29.543 [2024-11-20 06:38:01.151240] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:29.543 [2024-11-20 06:38:01.151244] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:29.543 [2024-11-20 06:38:01.151252] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:27:29.543 [2024-11-20 06:38:01.151256] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:29.543 [2024-11-20 06:38:01.151261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:29.543 [2024-11-20 06:38:01.151271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:29.543 [2024-11-20 06:38:01.151277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.543 [2024-11-20 06:38:01.151291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:29.543 [2024-11-20 06:38:01.151304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.543 [2024-11-20 06:38:01.151434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.543 [2024-11-20 06:38:01.151440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.543 [2024-11-20 06:38:01.151443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.543 [2024-11-20 06:38:01.151454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc8690) 00:27:29.543 
[2024-11-20 06:38:01.151468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.543 [2024-11-20 06:38:01.151474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1bc8690) 00:27:29.543 [2024-11-20 06:38:01.151485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.543 [2024-11-20 06:38:01.151490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1bc8690) 00:27:29.543 [2024-11-20 06:38:01.151501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.543 [2024-11-20 06:38:01.151506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.543 [2024-11-20 06:38:01.151517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.543 [2024-11-20 06:38:01.151521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:29.543 [2024-11-20 06:38:01.151529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:29.543 [2024-11-20 06:38:01.151535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc8690) 00:27:29.543 [2024-11-20 06:38:01.151544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.543 [2024-11-20 06:38:01.151555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a100, cid 0, qid 0 00:27:29.543 [2024-11-20 06:38:01.151559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a280, cid 1, qid 0 00:27:29.543 [2024-11-20 06:38:01.151563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a400, cid 2, qid 0 00:27:29.543 [2024-11-20 06:38:01.151567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.543 [2024-11-20 06:38:01.151571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a700, cid 4, qid 0 00:27:29.543 [2024-11-20 06:38:01.151671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.543 [2024-11-20 06:38:01.151677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.543 [2024-11-20 06:38:01.151680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:27:29.543 [2024-11-20 06:38:01.151683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a700) on tqpair=0x1bc8690 00:27:29.543 [2024-11-20 06:38:01.151690] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:29.543 [2024-11-20 06:38:01.151694] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:27:29.543 [2024-11-20 06:38:01.151704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc8690) 00:27:29.543 [2024-11-20 06:38:01.151713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.543 [2024-11-20 06:38:01.151724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a700, cid 4, qid 0 00:27:29.543 [2024-11-20 06:38:01.151800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.543 [2024-11-20 06:38:01.151807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.543 [2024-11-20 06:38:01.151810] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.543 [2024-11-20 06:38:01.151813] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc8690): datao=0, datal=4096, cccid=4 00:27:29.544 [2024-11-20 06:38:01.151817] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2a700) on tqpair(0x1bc8690): expected_datao=0, payload_size=4096 00:27:29.544 [2024-11-20 06:38:01.151821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.151826] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.151830] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.151850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.544 [2024-11-20 06:38:01.151855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.544 [2024-11-20 06:38:01.151858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.151862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a700) on tqpair=0x1bc8690 00:27:29.544 [2024-11-20 06:38:01.151874] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:29.544 [2024-11-20 06:38:01.151895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.151899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc8690) 00:27:29.544 [2024-11-20 06:38:01.151904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.544 [2024-11-20 06:38:01.151910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.151913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.151916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bc8690) 00:27:29.544 [2024-11-20 06:38:01.151922] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.544 [2024-11-20 06:38:01.151935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a700, cid 4, qid 0 00:27:29.544 [2024-11-20 06:38:01.151940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a880, cid 5, qid 0 00:27:29.544 [2024-11-20 06:38:01.152046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.544 [2024-11-20 06:38:01.152051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.544 [2024-11-20 06:38:01.152054] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.152058] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc8690): datao=0, datal=1024, cccid=4 00:27:29.544 [2024-11-20 06:38:01.152061] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2a700) on tqpair(0x1bc8690): expected_datao=0, payload_size=1024 00:27:29.544 [2024-11-20 06:38:01.152065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.152070] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.152074] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.152078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.544 [2024-11-20 06:38:01.152083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.544 [2024-11-20 06:38:01.152086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.152090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a880) on tqpair=0x1bc8690 00:27:29.544 [2024-11-20 06:38:01.193392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.544 [2024-11-20 06:38:01.193404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.544 [2024-11-20 06:38:01.193408] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a700) on tqpair=0x1bc8690 00:27:29.544 [2024-11-20 06:38:01.193422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc8690) 00:27:29.544 [2024-11-20 06:38:01.193432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.544 [2024-11-20 06:38:01.193450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a700, cid 4, qid 0 00:27:29.544 [2024-11-20 06:38:01.193579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.544 [2024-11-20 06:38:01.193585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.544 [2024-11-20 06:38:01.193588] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc8690): datao=0, datal=3072, cccid=4 00:27:29.544 [2024-11-20 06:38:01.193595] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2a700) on tqpair(0x1bc8690): expected_datao=0, payload_size=3072 00:27:29.544 [2024-11-20 06:38:01.193598] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193614] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193619] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.544 [2024-11-20 06:38:01.193661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.544 [2024-11-20 06:38:01.193664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a700) on tqpair=0x1bc8690 00:27:29.544 [2024-11-20 06:38:01.193675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc8690) 00:27:29.544 [2024-11-20 06:38:01.193684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.544 [2024-11-20 06:38:01.193697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a700, cid 4, qid 0 00:27:29.544 [2024-11-20 06:38:01.193771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.544 [2024-11-20 06:38:01.193777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.544 [2024-11-20 06:38:01.193780] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193783] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc8690): datao=0, datal=8, cccid=4 00:27:29.544 [2024-11-20 06:38:01.193787] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2a700) on tqpair(0x1bc8690): expected_datao=0, payload_size=8 00:27:29.544 [2024-11-20 06:38:01.193791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193796] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.193799] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.235335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.544 [2024-11-20 06:38:01.235345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.544 [2024-11-20 06:38:01.235348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.544 [2024-11-20 06:38:01.235351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a700) on tqpair=0x1bc8690 00:27:29.544 ===================================================== 00:27:29.544 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:29.544 ===================================================== 00:27:29.544 Controller Capabilities/Features 00:27:29.544 ================================ 00:27:29.544 Vendor ID: 0000 00:27:29.544 Subsystem Vendor ID: 0000 00:27:29.544 Serial Number: .................... 00:27:29.544 Model Number: ........................................ 
00:27:29.544 Firmware Version: 25.01 00:27:29.544 Recommended Arb Burst: 0 00:27:29.544 IEEE OUI Identifier: 00 00 00 00:27:29.544 Multi-path I/O 00:27:29.544 May have multiple subsystem ports: No 00:27:29.544 May have multiple controllers: No 00:27:29.544 Associated with SR-IOV VF: No 00:27:29.544 Max Data Transfer Size: 131072 00:27:29.544 Max Number of Namespaces: 0 00:27:29.544 Max Number of I/O Queues: 1024 00:27:29.544 NVMe Specification Version (VS): 1.3 00:27:29.544 NVMe Specification Version (Identify): 1.3 00:27:29.544 Maximum Queue Entries: 128 00:27:29.544 Contiguous Queues Required: Yes 00:27:29.544 Arbitration Mechanisms Supported 00:27:29.544 Weighted Round Robin: Not Supported 00:27:29.544 Vendor Specific: Not Supported 00:27:29.544 Reset Timeout: 15000 ms 00:27:29.544 Doorbell Stride: 4 bytes 00:27:29.544 NVM Subsystem Reset: Not Supported 00:27:29.544 Command Sets Supported 00:27:29.544 NVM Command Set: Supported 00:27:29.544 Boot Partition: Not Supported 00:27:29.544 Memory Page Size Minimum: 4096 bytes 00:27:29.544 Memory Page Size Maximum: 4096 bytes 00:27:29.544 Persistent Memory Region: Not Supported 00:27:29.544 Optional Asynchronous Events Supported 00:27:29.544 Namespace Attribute Notices: Not Supported 00:27:29.544 Firmware Activation Notices: Not Supported 00:27:29.544 ANA Change Notices: Not Supported 00:27:29.544 PLE Aggregate Log Change Notices: Not Supported 00:27:29.544 LBA Status Info Alert Notices: Not Supported 00:27:29.544 EGE Aggregate Log Change Notices: Not Supported 00:27:29.544 Normal NVM Subsystem Shutdown event: Not Supported 00:27:29.544 Zone Descriptor Change Notices: Not Supported 00:27:29.544 Discovery Log Change Notices: Supported 00:27:29.544 Controller Attributes 00:27:29.544 128-bit Host Identifier: Not Supported 00:27:29.544 Non-Operational Permissive Mode: Not Supported 00:27:29.544 NVM Sets: Not Supported 00:27:29.544 Read Recovery Levels: Not Supported 00:27:29.544 Endurance Groups: Not Supported 00:27:29.544 Predictable Latency Mode: Not Supported 00:27:29.544 Traffic Based Keep ALive: Not Supported 00:27:29.544 Namespace Granularity: Not Supported 00:27:29.544 SQ Associations: Not Supported 00:27:29.544 UUID List: Not Supported 00:27:29.544 Multi-Domain Subsystem: Not Supported 00:27:29.544 Fixed Capacity Management: Not Supported 00:27:29.544 Variable Capacity Management: Not Supported 00:27:29.544 Delete Endurance Group: Not Supported 00:27:29.544 Delete NVM Set: Not Supported 00:27:29.544 Extended LBA Formats Supported: Not Supported 00:27:29.544 Flexible Data Placement Supported: Not Supported 00:27:29.544 00:27:29.544 Controller Memory Buffer Support 00:27:29.544 ================================ 00:27:29.544 Supported: No 00:27:29.544 00:27:29.544 Persistent Memory Region Support 00:27:29.545 ================================ 00:27:29.545 Supported: No 00:27:29.545 00:27:29.545 Admin Command Set Attributes 00:27:29.545 ============================ 00:27:29.545 Security Send/Receive: Not Supported 00:27:29.545 Format NVM: Not Supported 00:27:29.545 Firmware Activate/Download: Not Supported 00:27:29.545 Namespace Management: Not Supported 00:27:29.545 Device Self-Test: Not Supported 00:27:29.545 Directives: Not Supported 00:27:29.545 NVMe-MI: Not Supported 00:27:29.545 Virtualization Management: Not Supported 00:27:29.545 Doorbell Buffer Config: Not Supported 00:27:29.545 Get LBA Status Capability: Not Supported 00:27:29.545 Command & Feature Lockdown Capability: Not Supported 00:27:29.545 Abort Command Limit: 1 00:27:29.545 Async 
Event Request Limit: 4 00:27:29.545 Number of Firmware Slots: N/A 00:27:29.545 Firmware Slot 1 Read-Only: N/A 00:27:29.545 Firmware Activation Without Reset: N/A 00:27:29.545 Multiple Update Detection Support: N/A 00:27:29.545 Firmware Update Granularity: No Information Provided 00:27:29.545 Per-Namespace SMART Log: No 00:27:29.545 Asymmetric Namespace Access Log Page: Not Supported 00:27:29.545 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:29.545 Command Effects Log Page: Not Supported 00:27:29.545 Get Log Page Extended Data: Supported 00:27:29.545 Telemetry Log Pages: Not Supported 00:27:29.545 Persistent Event Log Pages: Not Supported 00:27:29.545 Supported Log Pages Log Page: May Support 00:27:29.545 Commands Supported & Effects Log Page: Not Supported 00:27:29.545 Feature Identifiers & Effects Log Page:May Support 00:27:29.545 NVMe-MI Commands & Effects Log Page: May Support 00:27:29.545 Data Area 4 for Telemetry Log: Not Supported 00:27:29.545 Error Log Page Entries Supported: 128 00:27:29.545 Keep Alive: Not Supported 00:27:29.545 00:27:29.545 NVM Command Set Attributes 00:27:29.545 ========================== 00:27:29.545 Submission Queue Entry Size 00:27:29.545 Max: 1 00:27:29.545 Min: 1 00:27:29.545 Completion Queue Entry Size 00:27:29.545 Max: 1 00:27:29.545 Min: 1 00:27:29.545 Number of Namespaces: 0 00:27:29.545 Compare Command: Not Supported 00:27:29.545 Write Uncorrectable Command: Not Supported 00:27:29.545 Dataset Management Command: Not Supported 00:27:29.545 Write Zeroes Command: Not Supported 00:27:29.545 Set Features Save Field: Not Supported 00:27:29.545 Reservations: Not Supported 00:27:29.545 Timestamp: Not Supported 00:27:29.545 Copy: Not Supported 00:27:29.545 Volatile Write Cache: Not Present 00:27:29.545 Atomic Write Unit (Normal): 1 00:27:29.545 Atomic Write Unit (PFail): 1 00:27:29.545 Atomic Compare & Write Unit: 1 00:27:29.545 Fused Compare & Write: Supported 00:27:29.545 Scatter-Gather List 00:27:29.545 SGL Command Set: Supported 00:27:29.545 SGL Keyed: Supported 00:27:29.545 SGL Bit Bucket Descriptor: Not Supported 00:27:29.545 SGL Metadata Pointer: Not Supported 00:27:29.545 Oversized SGL: Not Supported 00:27:29.545 SGL Metadata Address: Not Supported 00:27:29.545 SGL Offset: Supported 00:27:29.545 Transport SGL Data Block: Not Supported 00:27:29.545 Replay Protected Memory Block: Not Supported 00:27:29.545 00:27:29.545 Firmware Slot Information 00:27:29.545 ========================= 00:27:29.545 Active slot: 0 00:27:29.545 00:27:29.545 00:27:29.545 Error Log 00:27:29.545 ========= 00:27:29.545 00:27:29.545 Active Namespaces 00:27:29.545 ================= 00:27:29.545 Discovery Log Page 00:27:29.545 ================== 00:27:29.545 Generation Counter: 2 00:27:29.545 Number of Records: 2 00:27:29.545 Record Format: 0 00:27:29.545 00:27:29.545 Discovery Log Entry 0 00:27:29.545 ---------------------- 00:27:29.545 Transport Type: 3 (TCP) 00:27:29.545 Address Family: 1 (IPv4) 00:27:29.545 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:29.545 Entry Flags: 00:27:29.545 Duplicate Returned Information: 1 00:27:29.545 Explicit Persistent Connection Support for Discovery: 1 00:27:29.545 Transport Requirements: 00:27:29.545 Secure Channel: Not Required 00:27:29.545 Port ID: 0 (0x0000) 00:27:29.545 Controller ID: 65535 (0xffff) 00:27:29.545 Admin Max SQ Size: 128 00:27:29.545 Transport Service Identifier: 4420 00:27:29.545 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:29.545 Transport Address: 10.0.0.2 00:27:29.545 
Discovery Log Entry 1 00:27:29.545 ---------------------- 00:27:29.545 Transport Type: 3 (TCP) 00:27:29.545 Address Family: 1 (IPv4) 00:27:29.545 Subsystem Type: 2 (NVM Subsystem) 00:27:29.545 Entry Flags: 00:27:29.545 Duplicate Returned Information: 0 00:27:29.545 Explicit Persistent Connection Support for Discovery: 0 00:27:29.545 Transport Requirements: 00:27:29.545 Secure Channel: Not Required 00:27:29.545 Port ID: 0 (0x0000) 00:27:29.545 Controller ID: 65535 (0xffff) 00:27:29.545 Admin Max SQ Size: 128 00:27:29.545 Transport Service Identifier: 4420 00:27:29.545 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:29.545 Transport Address: 10.0.0.2 [2024-11-20 06:38:01.235430] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:27:29.545 [2024-11-20 06:38:01.235442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a100) on tqpair=0x1bc8690 00:27:29.545 [2024-11-20 06:38:01.235448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.545 [2024-11-20 06:38:01.235453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a280) on tqpair=0x1bc8690 00:27:29.545 [2024-11-20 06:38:01.235457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.545 [2024-11-20 06:38:01.235461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a400) on tqpair=0x1bc8690 00:27:29.545 [2024-11-20 06:38:01.235465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.545 [2024-11-20 06:38:01.235469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.545 [2024-11-20 06:38:01.235473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.545 [2024-11-20 06:38:01.235483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.545 [2024-11-20 06:38:01.235496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.545 [2024-11-20 06:38:01.235509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.545 [2024-11-20 06:38:01.235569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.545 [2024-11-20 06:38:01.235575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.545 [2024-11-20 06:38:01.235578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.545 [2024-11-20 06:38:01.235588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.545 [2024-11-20 
06:38:01.235599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.545 [2024-11-20 06:38:01.235612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.545 [2024-11-20 06:38:01.235699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.545 [2024-11-20 06:38:01.235705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.545 [2024-11-20 06:38:01.235708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.545 [2024-11-20 06:38:01.235716] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:27:29.545 [2024-11-20 06:38:01.235720] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:27:29.545 [2024-11-20 06:38:01.235728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.545 [2024-11-20 06:38:01.235740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.545 [2024-11-20 06:38:01.235749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.545 [2024-11-20 06:38:01.235808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.545 [2024-11-20 06:38:01.235815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.545 [2024-11-20 06:38:01.235818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.545 [2024-11-20 06:38:01.235830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.545 [2024-11-20 06:38:01.235834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.235837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.546 [2024-11-20 06:38:01.235843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.546 [2024-11-20 06:38:01.235852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.546 [2024-11-20 06:38:01.235916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.235921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.546 [2024-11-20 06:38:01.235924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.235927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.546 [2024-11-20 06:38:01.235935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.235939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.235942] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.546 [2024-11-20 06:38:01.235948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.546 [2024-11-20 06:38:01.235957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.546 [2024-11-20 06:38:01.236015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.236021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.546 [2024-11-20 06:38:01.236024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.236027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.546 [2024-11-20 06:38:01.236035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.236039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.236042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.546 [2024-11-20 06:38:01.236047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.546 [2024-11-20 06:38:01.236057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.546 [2024-11-20 06:38:01.236114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.236120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.546 [2024-11-20 06:38:01.236123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.236126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.546 [2024-11-20 06:38:01.236134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.236138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.236141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.546 [2024-11-20 06:38:01.236146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.546 [2024-11-20 06:38:01.236155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.546 [2024-11-20 06:38:01.240208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.240215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.546 [2024-11-20 06:38:01.240220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.240224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.546 [2024-11-20 06:38:01.240233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.240236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.240240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc8690) 00:27:29.546 [2024-11-20 06:38:01.240245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.546 [2024-11-20 06:38:01.240256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2a580, cid 3, qid 0 00:27:29.546 [2024-11-20 06:38:01.240408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.240414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.546 [2024-11-20 06:38:01.240416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.240420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2a580) on tqpair=0x1bc8690 00:27:29.546 [2024-11-20 06:38:01.240427] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:27:29.546 00:27:29.546 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:29.546 [2024-11-20 06:38:01.277851] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:27:29.546 [2024-11-20 06:38:01.277885] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631695 ] 00:27:29.546 [2024-11-20 06:38:01.318374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:27:29.546 [2024-11-20 06:38:01.318416] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:29.546 [2024-11-20 06:38:01.318421] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:29.546 [2024-11-20 06:38:01.318432] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:29.546 [2024-11-20 06:38:01.318441] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:29.546 [2024-11-20 06:38:01.318811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:27:29.546 [2024-11-20 06:38:01.318838] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x218c690 0 00:27:29.546 [2024-11-20 06:38:01.329219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:29.546 [2024-11-20 06:38:01.329234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:29.546 [2024-11-20 06:38:01.329238] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:29.546 [2024-11-20 06:38:01.329241] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:29.546 [2024-11-20 06:38:01.329267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.329272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.329275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.546 [2024-11-20 06:38:01.329286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:29.546 [2024-11-20 06:38:01.329305] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.546 [2024-11-20 06:38:01.337216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.337226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.546 [2024-11-20 06:38:01.337229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.546 [2024-11-20 06:38:01.337243] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:29.546 [2024-11-20 06:38:01.337249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:27:29.546 [2024-11-20 06:38:01.337254] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:27:29.546 [2024-11-20 06:38:01.337264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.546 [2024-11-20 06:38:01.337278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.546 [2024-11-20 06:38:01.337290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.546 [2024-11-20 06:38:01.337430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.337436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.546 [2024-11-20 06:38:01.337439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.546 [2024-11-20 06:38:01.337446] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:27:29.546 [2024-11-20 06:38:01.337452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:27:29.546 [2024-11-20 06:38:01.337459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.546 [2024-11-20 06:38:01.337471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.546 [2024-11-20 06:38:01.337481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.546 [2024-11-20 06:38:01.337543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.337549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.546 [2024-11-20 06:38:01.337551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on 
tqpair=0x218c690 00:27:29.546 [2024-11-20 06:38:01.337559] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:27:29.546 [2024-11-20 06:38:01.337566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:29.546 [2024-11-20 06:38:01.337572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.546 [2024-11-20 06:38:01.337578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.546 [2024-11-20 06:38:01.337584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.546 [2024-11-20 06:38:01.337593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.546 [2024-11-20 06:38:01.337661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.546 [2024-11-20 06:38:01.337667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.547 [2024-11-20 06:38:01.337669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.337673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.547 [2024-11-20 06:38:01.337677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:29.547 [2024-11-20 06:38:01.337685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.337688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.337691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.337697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.547 [2024-11-20 06:38:01.337706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.547 [2024-11-20 06:38:01.337779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.547 [2024-11-20 06:38:01.337784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.547 [2024-11-20 06:38:01.337787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.337790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.547 [2024-11-20 06:38:01.337794] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:29.547 [2024-11-20 06:38:01.337798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:29.547 [2024-11-20 06:38:01.337805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:29.547 [2024-11-20 06:38:01.337912] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:27:29.547 [2024-11-20 06:38:01.337917] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:29.547 [2024-11-20 06:38:01.337924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.337927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.337930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.337935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.547 [2024-11-20 06:38:01.337945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.547 [2024-11-20 06:38:01.338005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.547 [2024-11-20 06:38:01.338011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.547 [2024-11-20 06:38:01.338014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.547 [2024-11-20 06:38:01.338022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:29.547 [2024-11-20 06:38:01.338029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.338041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.547 [2024-11-20 06:38:01.338053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.547 [2024-11-20 06:38:01.338124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.547 [2024-11-20 06:38:01.338130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.547 [2024-11-20 06:38:01.338132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.547 [2024-11-20 06:38:01.338140] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:29.547 [2024-11-20 06:38:01.338143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:29.547 [2024-11-20 06:38:01.338150] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:27:29.547 [2024-11-20 06:38:01.338160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:29.547 [2024-11-20 06:38:01.338167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.338176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.547 [2024-11-20 06:38:01.338186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.547 [2024-11-20 06:38:01.338288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.547 [2024-11-20 06:38:01.338294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.547 [2024-11-20 06:38:01.338298] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338301] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x218c690): datao=0, datal=4096, cccid=0 00:27:29.547 [2024-11-20 06:38:01.338305] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ee100) on tqpair(0x218c690): expected_datao=0, payload_size=4096 00:27:29.547 [2024-11-20 06:38:01.338308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338314] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338318] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.547 [2024-11-20 06:38:01.338338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.547 [2024-11-20 06:38:01.338341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.547 [2024-11-20 06:38:01.338351] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:27:29.547 [2024-11-20 06:38:01.338355] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:27:29.547 [2024-11-20 06:38:01.338358] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:27:29.547 [2024-11-20 06:38:01.338366] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:27:29.547 [2024-11-20 06:38:01.338370] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:27:29.547 [2024-11-20 06:38:01.338375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:27:29.547 [2024-11-20 06:38:01.338384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:29.547 [2024-11-20 06:38:01.338390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.338404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:29.547 [2024-11-20 06:38:01.338415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x21ee100, cid 0, qid 0 00:27:29.547 [2024-11-20 06:38:01.338480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.547 [2024-11-20 06:38:01.338486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.547 [2024-11-20 06:38:01.338489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.547 [2024-11-20 06:38:01.338498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.338509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.547 [2024-11-20 06:38:01.338514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.338525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.547 [2024-11-20 06:38:01.338530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.338541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.547 [2024-11-20 06:38:01.338546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.547 [2024-11-20 06:38:01.338552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x218c690) 00:27:29.547 [2024-11-20 06:38:01.338557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.547 [2024-11-20 06:38:01.338561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:29.547 [2024-11-20 06:38:01.338568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.338574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.338577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x218c690) 00:27:29.548 [2024-11-20 06:38:01.338582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.548 [2024-11-20 06:38:01.338593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee100, cid 0, qid 0 00:27:29.548 [2024-11-20 
06:38:01.338598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee280, cid 1, qid 0 00:27:29.548 [2024-11-20 06:38:01.338602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee400, cid 2, qid 0 00:27:29.548 [2024-11-20 06:38:01.338606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee580, cid 3, qid 0 00:27:29.548 [2024-11-20 06:38:01.338611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee700, cid 4, qid 0 00:27:29.548 [2024-11-20 06:38:01.338703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.548 [2024-11-20 06:38:01.338710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.548 [2024-11-20 06:38:01.338712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.338716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee700) on tqpair=0x218c690 00:27:29.548 [2024-11-20 06:38:01.338722] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:27:29.548 [2024-11-20 06:38:01.338726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.338734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.338740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.338745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.338749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.338752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x218c690) 00:27:29.548 [2024-11-20 06:38:01.338757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:29.548 [2024-11-20 06:38:01.338766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee700, cid 4, qid 0 00:27:29.548 [2024-11-20 06:38:01.338829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.548 [2024-11-20 06:38:01.338834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.548 [2024-11-20 06:38:01.338837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.338840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee700) on tqpair=0x218c690 00:27:29.548 [2024-11-20 06:38:01.338892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.338901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.338908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.338911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x218c690) 00:27:29.548 [2024-11-20 06:38:01.338916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.548 [2024-11-20 06:38:01.338926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee700, cid 4, qid 0 00:27:29.548 [2024-11-20 06:38:01.338997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.548 [2024-11-20 06:38:01.339002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.548 [2024-11-20 06:38:01.339005] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339009] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x218c690): datao=0, datal=4096, cccid=4 00:27:29.548 [2024-11-20 06:38:01.339013] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ee700) on tqpair(0x218c690): expected_datao=0, payload_size=4096 00:27:29.548 [2024-11-20 06:38:01.339016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339027] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339030] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.548 [2024-11-20 06:38:01.339069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.548 [2024-11-20 06:38:01.339073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee700) on tqpair=0x218c690 00:27:29.548 [2024-11-20 06:38:01.339086] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:27:29.548 [2024-11-20 06:38:01.339093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x218c690) 00:27:29.548 [2024-11-20 06:38:01.339116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.548 [2024-11-20 06:38:01.339127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee700, cid 4, qid 0 00:27:29.548 [2024-11-20 06:38:01.339214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.548 [2024-11-20 06:38:01.339220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.548 [2024-11-20 06:38:01.339223] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339226] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x218c690): datao=0, datal=4096, cccid=4 00:27:29.548 [2024-11-20 06:38:01.339230] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ee700) on tqpair(0x218c690): expected_datao=0, payload_size=4096 00:27:29.548 [2024-11-20 06:38:01.339233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339243] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339247] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.548 [2024-11-20 06:38:01.339277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.548 [2024-11-20 06:38:01.339280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee700) on tqpair=0x218c690 00:27:29.548 [2024-11-20 06:38:01.339294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x218c690) 00:27:29.548 [2024-11-20 06:38:01.339318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.548 [2024-11-20 06:38:01.339329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee700, cid 4, qid 0 00:27:29.548 [2024-11-20 06:38:01.339394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.548 [2024-11-20 06:38:01.339400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.548 [2024-11-20 06:38:01.339403] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339406] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x218c690): datao=0, datal=4096, cccid=4 00:27:29.548 [2024-11-20 06:38:01.339410] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ee700) on tqpair(0x218c690): expected_datao=0, payload_size=4096 00:27:29.548 [2024-11-20 06:38:01.339413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339425] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339428] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.548 [2024-11-20 06:38:01.339458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.548 [2024-11-20 06:38:01.339461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.548 [2024-11-20 06:38:01.339464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee700) on tqpair=0x218c690 00:27:29.548 [2024-11-20 06:38:01.339470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
supported features (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339504] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:27:29.548 [2024-11-20 06:38:01.339508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:27:29.548 [2024-11-20 06:38:01.339513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:27:29.549 [2024-11-20 06:38:01.339524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.339533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.549 [2024-11-20 06:38:01.339539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.339550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.549 [2024-11-20 06:38:01.339562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee700, cid 4, qid 0 00:27:29.549 [2024-11-20 06:38:01.339567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee880, cid 5, qid 0 00:27:29.549 [2024-11-20 06:38:01.339647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.339653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 06:38:01.339656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee700) on tqpair=0x218c690 00:27:29.549 [2024-11-20 06:38:01.339665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.339669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 06:38:01.339672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee880) on tqpair=0x218c690 00:27:29.549 [2024-11-20 06:38:01.339683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.339694] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.549 [2024-11-20 06:38:01.339704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee880, cid 5, qid 0 00:27:29.549 [2024-11-20 06:38:01.339769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.339775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 06:38:01.339778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee880) on tqpair=0x218c690 00:27:29.549 [2024-11-20 06:38:01.339789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.339798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.549 [2024-11-20 06:38:01.339807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee880, cid 5, qid 0 00:27:29.549 [2024-11-20 06:38:01.339868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.339874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 06:38:01.339877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee880) on tqpair=0x218c690 00:27:29.549 [2024-11-20 06:38:01.339888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.339897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.549 [2024-11-20 06:38:01.339906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee880, cid 5, qid 0 00:27:29.549 [2024-11-20 06:38:01.339964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.339970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 06:38:01.339973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee880) on tqpair=0x218c690 00:27:29.549 [2024-11-20 06:38:01.339988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.339993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.339998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.549 [2024-11-20 06:38:01.340004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.340012] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.549 [2024-11-20 06:38:01.340018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.340027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.549 [2024-11-20 06:38:01.340033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x218c690) 00:27:29.549 [2024-11-20 06:38:01.340043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.549 [2024-11-20 06:38:01.340054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee880, cid 5, qid 0 00:27:29.549 [2024-11-20 06:38:01.340058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee700, cid 4, qid 0 00:27:29.549 [2024-11-20 06:38:01.340062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21eea00, cid 6, qid 0 00:27:29.549 [2024-11-20 06:38:01.340066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21eeb80, cid 7, qid 0 00:27:29.549 [2024-11-20 06:38:01.340207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.549 [2024-11-20 06:38:01.340214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.549 [2024-11-20 06:38:01.340217] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340220] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x218c690): datao=0, datal=8192, cccid=5 00:27:29.549 [2024-11-20 06:38:01.340223] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ee880) on tqpair(0x218c690): expected_datao=0, payload_size=8192 00:27:29.549 [2024-11-20 06:38:01.340227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340239] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340243] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.549 [2024-11-20 06:38:01.340256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.549 [2024-11-20 06:38:01.340259] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340262] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x218c690): datao=0, datal=512, cccid=4 00:27:29.549 [2024-11-20 06:38:01.340266] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ee700) on tqpair(0x218c690): expected_datao=0, payload_size=512 00:27:29.549 [2024-11-20 06:38:01.340269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340275] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340281] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
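
The trace above walks SPDK's controller-initialization state machine for nqn.2016-06.io.spdk:cnode1: connect adminq, read VS and CAP, write CC.EN = 1, poll for CSTS.RDY = 1, IDENTIFY the controller and namespaces, configure AER, then negotiate keep-alive and queue counts before reaching "ready". Below is a minimal sketch of driving the same sequence through SPDK's public API — illustrative only, not the test's code: the app name and printf are placeholders, error handling is abbreviated, and a working SPDK build environment is assumed.

/* Connect to the same NVMe-oF TCP target the harness identified above.
 * spdk_nvme_connect() internally performs the state transitions traced
 * in this log (read VS/CAP, CC.EN = 1, CSTS.RDY poll, IDENTIFY, AER and
 * keep-alive setup). */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";          /* placeholder app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same target string passed to spdk_nvme_identify -r above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Blocks until the controller reaches the "ready" state that the
	 * debug trace reports just before the identify dump. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Cached IDENTIFY CONTROLLER data; mn/fr are fixed-width,
	 * non-NUL-terminated fields, hence the precision specifiers. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("MN: %.40s FR: %.8s\n", cdata->mn, cdata->fr);

	/* Triggers the orderly controller shutdown also visible later in
	 * this trace (RTD3E, shutdown timeout, "shutdown complete"). */
	spdk_nvme_detach(ctrlr);
	return 0;
}

The -L all flag on the spdk_nvme_identify invocation is what enables the per-module *DEBUG* logging interleaved with the identify output in this capture.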
00:27:29.549 [2024-11-20 06:38:01.340288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.549 [2024-11-20 06:38:01.340293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.549 [2024-11-20 06:38:01.340297] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340302] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x218c690): datao=0, datal=512, cccid=6 00:27:29.549 [2024-11-20 06:38:01.340308] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21eea00) on tqpair(0x218c690): expected_datao=0, payload_size=512 00:27:29.549 [2024-11-20 06:38:01.340314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340321] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340325] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:29.549 [2024-11-20 06:38:01.340336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:29.549 [2024-11-20 06:38:01.340339] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340342] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x218c690): datao=0, datal=4096, cccid=7 00:27:29.549 [2024-11-20 06:38:01.340346] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21eeb80) on tqpair(0x218c690): expected_datao=0, payload_size=4096 00:27:29.549 [2024-11-20 06:38:01.340350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340356] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340362] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.340376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 06:38:01.340379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee880) on tqpair=0x218c690 00:27:29.549 [2024-11-20 06:38:01.340393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.340398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 06:38:01.340401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee700) on tqpair=0x218c690 00:27:29.549 [2024-11-20 06:38:01.340412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.340417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 06:38:01.340421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.549 [2024-11-20 06:38:01.340424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21eea00) on tqpair=0x218c690 00:27:29.549 [2024-11-20 06:38:01.340430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.549 [2024-11-20 06:38:01.340435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.549 [2024-11-20 
06:38:01.340437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.550 [2024-11-20 06:38:01.340440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21eeb80) on tqpair=0x218c690 00:27:29.550 ===================================================== 00:27:29.550 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.550 ===================================================== 00:27:29.550 Controller Capabilities/Features 00:27:29.550 ================================ 00:27:29.550 Vendor ID: 8086 00:27:29.550 Subsystem Vendor ID: 8086 00:27:29.550 Serial Number: SPDK00000000000001 00:27:29.550 Model Number: SPDK bdev Controller 00:27:29.550 Firmware Version: 25.01 00:27:29.550 Recommended Arb Burst: 6 00:27:29.550 IEEE OUI Identifier: e4 d2 5c 00:27:29.550 Multi-path I/O 00:27:29.550 May have multiple subsystem ports: Yes 00:27:29.550 May have multiple controllers: Yes 00:27:29.550 Associated with SR-IOV VF: No 00:27:29.550 Max Data Transfer Size: 131072 00:27:29.550 Max Number of Namespaces: 32 00:27:29.550 Max Number of I/O Queues: 127 00:27:29.550 NVMe Specification Version (VS): 1.3 00:27:29.550 NVMe Specification Version (Identify): 1.3 00:27:29.550 Maximum Queue Entries: 128 00:27:29.550 Contiguous Queues Required: Yes 00:27:29.550 Arbitration Mechanisms Supported 00:27:29.550 Weighted Round Robin: Not Supported 00:27:29.550 Vendor Specific: Not Supported 00:27:29.550 Reset Timeout: 15000 ms 00:27:29.550 Doorbell Stride: 4 bytes 00:27:29.550 NVM Subsystem Reset: Not Supported 00:27:29.550 Command Sets Supported 00:27:29.550 NVM Command Set: Supported 00:27:29.550 Boot Partition: Not Supported 00:27:29.550 Memory Page Size Minimum: 4096 bytes 00:27:29.550 Memory Page Size Maximum: 4096 bytes 00:27:29.550 Persistent Memory Region: Not Supported 00:27:29.550 Optional Asynchronous Events Supported 00:27:29.550 Namespace Attribute Notices: Supported 00:27:29.550 Firmware Activation Notices: Not Supported 00:27:29.550 ANA Change Notices: Not Supported 00:27:29.550 PLE Aggregate Log Change Notices: Not Supported 00:27:29.550 LBA Status Info Alert Notices: Not Supported 00:27:29.550 EGE Aggregate Log Change Notices: Not Supported 00:27:29.550 Normal NVM Subsystem Shutdown event: Not Supported 00:27:29.550 Zone Descriptor Change Notices: Not Supported 00:27:29.550 Discovery Log Change Notices: Not Supported 00:27:29.550 Controller Attributes 00:27:29.550 128-bit Host Identifier: Supported 00:27:29.550 Non-Operational Permissive Mode: Not Supported 00:27:29.550 NVM Sets: Not Supported 00:27:29.550 Read Recovery Levels: Not Supported 00:27:29.550 Endurance Groups: Not Supported 00:27:29.550 Predictable Latency Mode: Not Supported 00:27:29.550 Traffic Based Keep ALive: Not Supported 00:27:29.550 Namespace Granularity: Not Supported 00:27:29.550 SQ Associations: Not Supported 00:27:29.550 UUID List: Not Supported 00:27:29.550 Multi-Domain Subsystem: Not Supported 00:27:29.550 Fixed Capacity Management: Not Supported 00:27:29.550 Variable Capacity Management: Not Supported 00:27:29.550 Delete Endurance Group: Not Supported 00:27:29.550 Delete NVM Set: Not Supported 00:27:29.550 Extended LBA Formats Supported: Not Supported 00:27:29.550 Flexible Data Placement Supported: Not Supported 00:27:29.550 00:27:29.550 Controller Memory Buffer Support 00:27:29.550 ================================ 00:27:29.550 Supported: No 00:27:29.550 00:27:29.550 Persistent Memory Region Support 00:27:29.550 ================================ 00:27:29.550 
Supported: No 00:27:29.550 00:27:29.550 Admin Command Set Attributes 00:27:29.550 ============================ 00:27:29.550 Security Send/Receive: Not Supported 00:27:29.550 Format NVM: Not Supported 00:27:29.550 Firmware Activate/Download: Not Supported 00:27:29.550 Namespace Management: Not Supported 00:27:29.550 Device Self-Test: Not Supported 00:27:29.550 Directives: Not Supported 00:27:29.550 NVMe-MI: Not Supported 00:27:29.550 Virtualization Management: Not Supported 00:27:29.550 Doorbell Buffer Config: Not Supported 00:27:29.550 Get LBA Status Capability: Not Supported 00:27:29.550 Command & Feature Lockdown Capability: Not Supported 00:27:29.550 Abort Command Limit: 4 00:27:29.550 Async Event Request Limit: 4 00:27:29.550 Number of Firmware Slots: N/A 00:27:29.550 Firmware Slot 1 Read-Only: N/A 00:27:29.550 Firmware Activation Without Reset: N/A 00:27:29.550 Multiple Update Detection Support: N/A 00:27:29.550 Firmware Update Granularity: No Information Provided 00:27:29.550 Per-Namespace SMART Log: No 00:27:29.550 Asymmetric Namespace Access Log Page: Not Supported 00:27:29.550 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:29.550 Command Effects Log Page: Supported 00:27:29.550 Get Log Page Extended Data: Supported 00:27:29.550 Telemetry Log Pages: Not Supported 00:27:29.550 Persistent Event Log Pages: Not Supported 00:27:29.550 Supported Log Pages Log Page: May Support 00:27:29.550 Commands Supported & Effects Log Page: Not Supported 00:27:29.550 Feature Identifiers & Effects Log Page:May Support 00:27:29.550 NVMe-MI Commands & Effects Log Page: May Support 00:27:29.550 Data Area 4 for Telemetry Log: Not Supported 00:27:29.550 Error Log Page Entries Supported: 128 00:27:29.550 Keep Alive: Supported 00:27:29.550 Keep Alive Granularity: 10000 ms 00:27:29.550 00:27:29.550 NVM Command Set Attributes 00:27:29.550 ========================== 00:27:29.550 Submission Queue Entry Size 00:27:29.550 Max: 64 00:27:29.550 Min: 64 00:27:29.550 Completion Queue Entry Size 00:27:29.550 Max: 16 00:27:29.550 Min: 16 00:27:29.550 Number of Namespaces: 32 00:27:29.550 Compare Command: Supported 00:27:29.550 Write Uncorrectable Command: Not Supported 00:27:29.550 Dataset Management Command: Supported 00:27:29.550 Write Zeroes Command: Supported 00:27:29.550 Set Features Save Field: Not Supported 00:27:29.550 Reservations: Supported 00:27:29.550 Timestamp: Not Supported 00:27:29.550 Copy: Supported 00:27:29.550 Volatile Write Cache: Present 00:27:29.550 Atomic Write Unit (Normal): 1 00:27:29.550 Atomic Write Unit (PFail): 1 00:27:29.550 Atomic Compare & Write Unit: 1 00:27:29.550 Fused Compare & Write: Supported 00:27:29.550 Scatter-Gather List 00:27:29.550 SGL Command Set: Supported 00:27:29.550 SGL Keyed: Supported 00:27:29.550 SGL Bit Bucket Descriptor: Not Supported 00:27:29.550 SGL Metadata Pointer: Not Supported 00:27:29.550 Oversized SGL: Not Supported 00:27:29.550 SGL Metadata Address: Not Supported 00:27:29.550 SGL Offset: Supported 00:27:29.550 Transport SGL Data Block: Not Supported 00:27:29.550 Replay Protected Memory Block: Not Supported 00:27:29.550 00:27:29.550 Firmware Slot Information 00:27:29.550 ========================= 00:27:29.550 Active slot: 1 00:27:29.550 Slot 1 Firmware Revision: 25.01 00:27:29.550 00:27:29.550 00:27:29.550 Commands Supported and Effects 00:27:29.550 ============================== 00:27:29.550 Admin Commands 00:27:29.550 -------------- 00:27:29.550 Get Log Page (02h): Supported 00:27:29.550 Identify (06h): Supported 00:27:29.550 Abort (08h): Supported 
00:27:29.550 Set Features (09h): Supported 00:27:29.550 Get Features (0Ah): Supported 00:27:29.550 Asynchronous Event Request (0Ch): Supported 00:27:29.550 Keep Alive (18h): Supported 00:27:29.550 I/O Commands 00:27:29.550 ------------ 00:27:29.550 Flush (00h): Supported LBA-Change 00:27:29.550 Write (01h): Supported LBA-Change 00:27:29.550 Read (02h): Supported 00:27:29.550 Compare (05h): Supported 00:27:29.550 Write Zeroes (08h): Supported LBA-Change 00:27:29.550 Dataset Management (09h): Supported LBA-Change 00:27:29.550 Copy (19h): Supported LBA-Change 00:27:29.550 00:27:29.550 Error Log 00:27:29.550 ========= 00:27:29.550 00:27:29.550 Arbitration 00:27:29.550 =========== 00:27:29.550 Arbitration Burst: 1 00:27:29.550 00:27:29.550 Power Management 00:27:29.550 ================ 00:27:29.550 Number of Power States: 1 00:27:29.550 Current Power State: Power State #0 00:27:29.550 Power State #0: 00:27:29.550 Max Power: 0.00 W 00:27:29.550 Non-Operational State: Operational 00:27:29.550 Entry Latency: Not Reported 00:27:29.550 Exit Latency: Not Reported 00:27:29.550 Relative Read Throughput: 0 00:27:29.550 Relative Read Latency: 0 00:27:29.550 Relative Write Throughput: 0 00:27:29.550 Relative Write Latency: 0 00:27:29.550 Idle Power: Not Reported 00:27:29.550 Active Power: Not Reported 00:27:29.550 Non-Operational Permissive Mode: Not Supported 00:27:29.550 00:27:29.550 Health Information 00:27:29.550 ================== 00:27:29.550 Critical Warnings: 00:27:29.550 Available Spare Space: OK 00:27:29.550 Temperature: OK 00:27:29.551 Device Reliability: OK 00:27:29.551 Read Only: No 00:27:29.551 Volatile Memory Backup: OK 00:27:29.551 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:29.551 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:29.551 Available Spare: 0% 00:27:29.551 Available Spare Threshold: 0% 00:27:29.551 Life Percentage Used:[2024-11-20 06:38:01.340518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x218c690) 00:27:29.551 [2024-11-20 06:38:01.340528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.551 [2024-11-20 06:38:01.340539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21eeb80, cid 7, qid 0 00:27:29.551 [2024-11-20 06:38:01.340615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.551 [2024-11-20 06:38:01.340621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.551 [2024-11-20 06:38:01.340623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21eeb80) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.340653] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:29.551 [2024-11-20 06:38:01.340661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee100) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.340666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.551 [2024-11-20 06:38:01.340670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee280) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.340674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.551 [2024-11-20 06:38:01.340678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee400) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.340682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.551 [2024-11-20 06:38:01.340686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee580) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.340690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.551 [2024-11-20 06:38:01.340697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x218c690) 00:27:29.551 [2024-11-20 06:38:01.340710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.551 [2024-11-20 06:38:01.340721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee580, cid 3, qid 0 00:27:29.551 [2024-11-20 06:38:01.340779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.551 [2024-11-20 06:38:01.340785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.551 [2024-11-20 06:38:01.340788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee580) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.340797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x218c690) 00:27:29.551 [2024-11-20 06:38:01.340809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.551 [2024-11-20 06:38:01.340821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee580, cid 3, qid 0 00:27:29.551 [2024-11-20 06:38:01.340894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.551 [2024-11-20 06:38:01.340900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.551 [2024-11-20 06:38:01.340904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee580) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.340911] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:29.551 [2024-11-20 06:38:01.340915] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:29.551 [2024-11-20 06:38:01.340922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.340929] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x218c690) 00:27:29.551 [2024-11-20 06:38:01.340935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.551 [2024-11-20 06:38:01.340945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee580, cid 3, qid 0 00:27:29.551 [2024-11-20 06:38:01.341004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.551 [2024-11-20 06:38:01.341010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.551 [2024-11-20 06:38:01.341013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.341016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee580) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.341024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.341029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.341033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x218c690) 00:27:29.551 [2024-11-20 06:38:01.341039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.551 [2024-11-20 06:38:01.341048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee580, cid 3, qid 0 00:27:29.551 [2024-11-20 06:38:01.341116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.551 [2024-11-20 06:38:01.341121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.551 [2024-11-20 06:38:01.341124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.341127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee580) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.341139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.341144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.341147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x218c690) 00:27:29.551 [2024-11-20 06:38:01.341153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.551 [2024-11-20 06:38:01.341162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee580, cid 3, qid 0 00:27:29.551 [2024-11-20 06:38:01.345211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.551 [2024-11-20 06:38:01.345219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.551 [2024-11-20 06:38:01.345222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.345225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee580) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.345233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.345237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.345240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x218c690) 00:27:29.551 [2024-11-20 06:38:01.345246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.551 [2024-11-20 06:38:01.345257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ee580, cid 3, qid 0 00:27:29.551 [2024-11-20 06:38:01.345399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:29.551 [2024-11-20 06:38:01.345405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:29.551 [2024-11-20 06:38:01.345407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:29.551 [2024-11-20 06:38:01.345410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ee580) on tqpair=0x218c690 00:27:29.551 [2024-11-20 06:38:01.345417] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT 
SIGTERM EXIT 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:29.552 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:29.811 rmmod nvme_tcp 00:27:29.811 rmmod nvme_fabrics 00:27:29.811 rmmod nvme_keyring 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 631443 ']' 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 631443 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 631443 ']' 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 631443 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 631443 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 631443' 00:27:29.811 killing process with pid 631443 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 631443 00:27:29.811 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 631443 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.070 06:38:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.976 06:38:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.976 00:27:31.976 real 0m9.901s 00:27:31.976 user 0m7.910s 00:27:31.976 sys 0m4.876s 00:27:31.976 06:38:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:31.976 06:38:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 ************************************ 00:27:31.976 END TEST nvmf_identify 00:27:31.976 ************************************ 00:27:31.976 06:38:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:31.976 06:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:31.976 06:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:31.976 06:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 ************************************ 00:27:31.976 START TEST nvmf_perf 00:27:31.976 ************************************ 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:32.236 * Looking for test storage... 00:27:32.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:32.236 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:32.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.237 --rc genhtml_branch_coverage=1 00:27:32.237 --rc genhtml_function_coverage=1 00:27:32.237 --rc genhtml_legend=1 00:27:32.237 --rc geninfo_all_blocks=1 00:27:32.237 --rc geninfo_unexecuted_blocks=1 00:27:32.237 00:27:32.237 ' 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:32.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.237 --rc genhtml_branch_coverage=1 00:27:32.237 --rc genhtml_function_coverage=1 00:27:32.237 --rc genhtml_legend=1 00:27:32.237 --rc geninfo_all_blocks=1 00:27:32.237 --rc geninfo_unexecuted_blocks=1 00:27:32.237 00:27:32.237 ' 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:32.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.237 --rc genhtml_branch_coverage=1 00:27:32.237 --rc genhtml_function_coverage=1 00:27:32.237 --rc genhtml_legend=1 00:27:32.237 --rc geninfo_all_blocks=1 00:27:32.237 --rc geninfo_unexecuted_blocks=1 00:27:32.237 00:27:32.237 ' 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:32.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.237 --rc genhtml_branch_coverage=1 00:27:32.237 --rc genhtml_function_coverage=1 00:27:32.237 --rc genhtml_legend=1 00:27:32.237 --rc geninfo_all_blocks=1 00:27:32.237 --rc geninfo_unexecuted_blocks=1 00:27:32.237 00:27:32.237 ' 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.237 06:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:32.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.237 06:38:04 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.237 06:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:38.812 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:38.812 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:38.812 Found net devices under 0000:86:00.0: cvl_0_0 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.812 06:38:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:38.812 Found net devices under 0000:86:00.1: cvl_0_1 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.812 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.813 06:38:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:38.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:27:38.813 00:27:38.813 --- 10.0.0.2 ping statistics --- 00:27:38.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.813 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:27:38.813 00:27:38.813 --- 10.0.0.1 ping statistics --- 00:27:38.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.813 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=635218 00:27:38.813 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 635218 00:27:38.813 06:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:38.813 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 635218 ']' 00:27:38.813 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.813 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:38.813 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:27:38.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.813 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:38.813 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:38.813 [2024-11-20 06:38:10.056900] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:27:38.813 [2024-11-20 06:38:10.056960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.813 [2024-11-20 06:38:10.136153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.813 [2024-11-20 06:38:10.177002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.813 [2024-11-20 06:38:10.177038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.813 [2024-11-20 06:38:10.177045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.813 [2024-11-20 06:38:10.177052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.813 [2024-11-20 06:38:10.177057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.813 [2024-11-20 06:38:10.178662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.813 [2024-11-20 06:38:10.178698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.813 [2024-11-20 06:38:10.178787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.813 [2024-11-20 06:38:10.178789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.072 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:39.072 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:27:39.072 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:39.072 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:39.072 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:39.331 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.331 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:39.331 06:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:42.619 06:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:42.619 06:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:42.619 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:27:42.619 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:42.619 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
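For reference, the target bring-up that perf.sh traces over the next lines condenses to the short sketch below. This is a minimal sketch, not part of the captured run, assuming the nvmf_tgt app is already running and reusing the same names and addresses this test uses (Malloc0 and Nvme0n1 bdevs, subsystem nqn.2016-06.io.spdk:cnode1, listener 10.0.0.2:4420):
  # sketch of the NVMe-oF/TCP target setup exercised by this test (assumes a running nvmf_tgt)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                  # 64 MiB malloc bdev with 512-byte blocks -> "Malloc0"
  $rpc nvmf_create_transport -t tcp -o            # create the TCP transport (options as this test passes them)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # namespace 1: the malloc bdev
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                    # namespace 2: the local NVMe bdev
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420            # expose the discovery service too
Each call corresponds to an rpc.py invocation traced below; the initiator-side spdk_nvme_perf runs then reach the target via 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.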
00:27:42.619 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:27:42.619 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:42.619 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:42.619 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:42.878 [2024-11-20 06:38:14.558266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.878 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:43.137 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:43.137 06:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:43.396 06:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:43.396 06:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:43.654 06:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.654 [2024-11-20 06:38:15.397403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.654 06:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:43.913 06:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:27:43.913 06:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:27:43.913 06:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:43.913 06:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:27:45.028 Initializing NVMe Controllers 00:27:45.028 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:27:45.028 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:27:45.028 Initialization complete. Launching workers. 
00:27:45.028 ======================================================== 00:27:45.028 Latency(us) 00:27:45.028 Device Information : IOPS MiB/s Average min max 00:27:45.028 PCIE (0000:5e:00.0) NSID 1 from core 0: 98286.10 383.93 324.96 33.82 4653.96 00:27:45.028 ======================================================== 00:27:45.028 Total : 98286.10 383.93 324.96 33.82 4653.96 00:27:45.028 00:27:45.287 06:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:46.665 Initializing NVMe Controllers 00:27:46.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:46.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:46.665 Initialization complete. Launching workers. 00:27:46.665 ======================================================== 00:27:46.665 Latency(us) 00:27:46.665 Device Information : IOPS MiB/s Average min max 00:27:46.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 243.00 0.95 4167.44 105.19 45689.09 00:27:46.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40.00 0.16 25916.50 7956.98 47903.04 00:27:46.665 ======================================================== 00:27:46.665 Total : 283.00 1.11 7241.51 105.19 47903.04 00:27:46.665 00:27:46.665 06:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:48.040 Initializing NVMe Controllers 00:27:48.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:48.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:48.040 Initialization complete. Launching workers. 00:27:48.040 ======================================================== 00:27:48.041 Latency(us) 00:27:48.041 Device Information : IOPS MiB/s Average min max 00:27:48.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11351.97 44.34 2845.20 345.40 44454.18 00:27:48.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3847.65 15.03 8350.66 6452.39 15817.08 00:27:48.041 ======================================================== 00:27:48.041 Total : 15199.62 59.37 4238.86 345.40 44454.18 00:27:48.041 00:27:48.041 06:38:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:48.041 06:38:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:48.041 06:38:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.575 Initializing NVMe Controllers 00:27:50.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:50.576 Controller IO queue size 128, less than required. 00:27:50.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:50.576 Controller IO queue size 128, less than required. 00:27:50.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:50.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:50.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:50.576 Initialization complete. Launching workers. 00:27:50.576 ======================================================== 00:27:50.576 Latency(us) 00:27:50.576 Device Information : IOPS MiB/s Average min max 00:27:50.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1799.20 449.80 71904.51 53606.30 112846.08 00:27:50.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.89 150.97 219558.16 78350.56 322228.11 00:27:50.576 ======================================================== 00:27:50.576 Total : 2403.09 600.77 109009.58 53606.30 322228.11 00:27:50.576 00:27:50.576 06:38:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:50.834 No valid NVMe controllers or AIO or URING devices found 00:27:50.834 Initializing NVMe Controllers 00:27:50.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:50.834 Controller IO queue size 128, less than required. 00:27:50.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:50.834 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:50.834 Controller IO queue size 128, less than required. 00:27:50.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:50.834 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:50.834 WARNING: Some requested NVMe devices were skipped 00:27:50.834 06:38:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:53.371 Initializing NVMe Controllers 00:27:53.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.371 Controller IO queue size 128, less than required. 00:27:53.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:53.371 Controller IO queue size 128, less than required. 00:27:53.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:53.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:53.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:53.371 Initialization complete. Launching workers. 
00:27:53.371 00:27:53.371 ==================== 00:27:53.371 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:53.371 TCP transport: 00:27:53.371 polls: 15733 00:27:53.371 idle_polls: 12317 00:27:53.371 sock_completions: 3416 00:27:53.371 nvme_completions: 6259 00:27:53.371 submitted_requests: 9472 00:27:53.371 queued_requests: 1 00:27:53.371 00:27:53.371 ==================== 00:27:53.371 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:53.371 TCP transport: 00:27:53.371 polls: 15499 00:27:53.371 idle_polls: 11586 00:27:53.371 sock_completions: 3913 00:27:53.371 nvme_completions: 6575 00:27:53.371 submitted_requests: 9810 00:27:53.371 queued_requests: 1 00:27:53.371 ======================================================== 00:27:53.371 Latency(us) 00:27:53.371 Device Information : IOPS MiB/s Average min max 00:27:53.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1562.03 390.51 84362.21 53139.40 143724.10 00:27:53.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1640.90 410.23 78428.94 47860.67 112362.50 00:27:53.371 ======================================================== 00:27:53.371 Total : 3202.93 800.73 81322.52 47860.67 143724.10 00:27:53.371 00:27:53.371 06:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:53.371 06:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.371 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.630 rmmod nvme_tcp 00:27:53.630 rmmod nvme_fabrics 00:27:53.630 rmmod nvme_keyring 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 635218 ']' 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 635218 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 635218 ']' 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 635218 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 635218 00:27:53.630 06:38:25 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 635218' 00:27:53.630 killing process with pid 635218 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 635218 00:27:53.630 06:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 635218 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.536 06:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.074 00:27:58.074 real 0m25.582s 00:27:58.074 user 1m8.319s 00:27:58.074 sys 0m8.308s 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:58.074 ************************************ 00:27:58.074 END TEST nvmf_perf 00:27:58.074 ************************************ 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.074 ************************************ 00:27:58.074 START TEST nvmf_fio_host 00:27:58.074 ************************************ 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:58.074 * Looking for test storage... 
00:27:58.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.074 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:58.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.074 --rc genhtml_branch_coverage=1 00:27:58.074 --rc genhtml_function_coverage=1 00:27:58.074 --rc genhtml_legend=1 00:27:58.074 --rc geninfo_all_blocks=1 00:27:58.075 --rc geninfo_unexecuted_blocks=1 00:27:58.075 00:27:58.075 ' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:58.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.075 --rc genhtml_branch_coverage=1 00:27:58.075 --rc genhtml_function_coverage=1 00:27:58.075 --rc genhtml_legend=1 00:27:58.075 --rc geninfo_all_blocks=1 00:27:58.075 --rc geninfo_unexecuted_blocks=1 00:27:58.075 00:27:58.075 ' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:58.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.075 --rc genhtml_branch_coverage=1 00:27:58.075 --rc genhtml_function_coverage=1 00:27:58.075 --rc genhtml_legend=1 00:27:58.075 --rc geninfo_all_blocks=1 00:27:58.075 --rc geninfo_unexecuted_blocks=1 00:27:58.075 00:27:58.075 ' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:58.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.075 --rc genhtml_branch_coverage=1 00:27:58.075 --rc genhtml_function_coverage=1 00:27:58.075 --rc genhtml_legend=1 00:27:58.075 --rc geninfo_all_blocks=1 00:27:58.075 --rc geninfo_unexecuted_blocks=1 00:27:58.075 00:27:58.075 ' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.075 06:38:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:58.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:58.075 
06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.075 06:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:04.651 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:04.651 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.651 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:04.651 Found net devices under 0000:86:00.0: cvl_0_0 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:04.652 Found net devices under 0000:86:00.1: cvl_0_1 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:04.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:04.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms
00:28:04.652 
00:28:04.652 --- 10.0.0.2 ping statistics ---
00:28:04.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:04.652 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:04.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:04.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms
00:28:04.652 
00:28:04.652 --- 10.0.0.1 ping statistics ---
00:28:04.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:04.652 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=641570
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 641570
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 641570 ']'
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:28:04.652 [2024-11-20 06:38:35.617865] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
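For reference, the network plumbing nvmftestinit performed above, collected in one place; every command appears verbatim in the trace (cvl_0_0 and cvl_0_1 are the ice-driver ports found during device discovery, and the target side runs inside the namespace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target, verified above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk and why the listeners created below bind 10.0.0.2 rather than a loopback address.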
00:28:04.652 [2024-11-20 06:38:35.617905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.652 [2024-11-20 06:38:35.696389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.652 [2024-11-20 06:38:35.739110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.652 [2024-11-20 06:38:35.739147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.652 [2024-11-20 06:38:35.739154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.652 [2024-11-20 06:38:35.739160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.652 [2024-11-20 06:38:35.739166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.652 [2024-11-20 06:38:35.740749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.652 [2024-11-20 06:38:35.740857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.652 [2024-11-20 06:38:35.740948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.652 [2024-11-20 06:38:35.740950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:28:04.652 06:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:04.652 [2024-11-20 06:38:36.014597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.652 06:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:04.653 06:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:04.653 06:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.653 06:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:04.653 Malloc1 00:28:04.653 06:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.911 06:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:04.911 06:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.238 [2024-11-20 06:38:36.894292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.238 06:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib=
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:28:05.496 06:38:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:28:05.754 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:28:05.754 fio-3.35
00:28:05.754 Starting 1 thread
00:28:08.287 
00:28:08.287 test: (groupid=0, jobs=1): err= 0: pid=641996: Wed Nov 20 06:38:39 2024
00:28:08.287 read: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.8MiB/2005msec)
00:28:08.287 slat (nsec): min=1503, max=174091, avg=1692.31, stdev=1604.86
00:28:08.287 clat (usec): min=2522, max=10434, avg=5934.73, stdev=479.48
00:28:08.287 lat (usec): min=2546, max=10436, avg=5936.42, stdev=479.37
00:28:08.287 clat percentiles (usec):
00:28:08.287 | 1.00th=[ 4883], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538],
00:28:08.287 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063],
00:28:08.287 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652],
00:28:08.287 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8586], 99.95th=[ 9765],
00:28:08.287 | 99.99th=[ 9896]
00:28:08.287 bw ( KiB/s): min=46120, max=48176, per=99.96%, avg=47376.00, stdev=913.15, samples=4
00:28:08.287 iops : min=11530, max=12044, avg=11844.00, stdev=228.29, samples=4
00:28:08.287 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(92.4MiB/2005msec); 0 zone resets
00:28:08.287 slat (nsec): min=1535, max=157517, avg=1754.26, stdev=1162.87
00:28:08.287 clat (usec): min=1658, max=9735, avg=4823.88, stdev=396.64
00:28:08.287 lat (usec): min=1669, max=9736, avg=4825.64, stdev=396.57
00:28:08.287 clat percentiles (usec):
00:28:08.287 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555],
00:28:08.287 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883],
00:28:08.287 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407],
00:28:08.287 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7570], 99.95th=[ 9241],
00:28:08.287 | 99.99th=[ 9634]
00:28:08.287 bw ( KiB/s): min=46664, max=47744, per=100.00%, avg=47168.00, stdev=450.33, samples=4
00:28:08.287 iops : min=11666, max=11936, avg=11792.00, stdev=112.58, samples=4
00:28:08.287 lat (msec) : 2=0.03%, 4=0.66%, 10=99.31%, 20=0.01%
00:28:08.287 cpu : usr=72.90%, sys=26.20%, ctx=118, majf=0, minf=3
00:28:08.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:28:08.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:08.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:08.287 issued rwts: total=23757,23642,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:08.287 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:08.287 
00:28:08.287 Run status group 0 (all jobs):
00:28:08.287 READ: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.8MiB (97.3MB), run=2005-2005msec
00:28:08.287 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=92.4MiB (96.8MB), run=2005-2005msec
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib=
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:28:08.287 06:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:28:08.546 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:28:08.546 fio-3.35
00:28:08.546 Starting 1 thread
00:28:11.084 
00:28:11.084 test: (groupid=0, jobs=1): err= 0: pid=642523: Wed Nov 20 06:38:42 2024
00:28:11.084 read: IOPS=11.0k, BW=172MiB/s (181MB/s)(346MiB/2006msec)
00:28:11.084 slat (nsec): min=2513, max=86593, avg=2834.28, stdev=1302.82
00:28:11.084 clat (usec): min=1312, max=13628, avg=6667.01, stdev=1532.28
00:28:11.084 lat (usec): min=1315, max=13642, avg=6669.84, stdev=1532.44
00:28:11.084 clat percentiles (usec):
00:28:11.084 | 1.00th=[ 3687], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5342],
00:28:11.084 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111],
00:28:11.084 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9241],
00:28:11.084 | 99.00th=[10552], 99.50th=[11076], 99.90th=[12125], 99.95th=[13042],
00:28:11.084 | 99.99th=[13566]
00:28:11.084 bw ( KiB/s): min=84608, max=94208, per=51.03%, avg=90000.00, stdev=4054.83, samples=4
00:28:11.084 iops : min= 5288, max= 5888, avg=5625.00, stdev=253.43, samples=4
00:28:11.084 write: IOPS=6505, BW=102MiB/s (107MB/s)(184MiB/1811msec); 0 zone resets
00:28:11.084 slat (usec): min=29, max=381, avg=31.82, stdev= 7.72
00:28:11.084 clat (usec): min=3613, max=15170, avg=8614.74, stdev=1565.47
00:28:11.084 lat (usec): min=3645, max=15234, avg=8646.56, stdev=1567.04
00:28:11.084 clat percentiles (usec):
00:28:11.084 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7242],
00:28:11.084 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717],
00:28:11.084 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11469],
00:28:11.084 | 99.00th=[12518], 99.50th=[13173], 99.90th=[14746], 99.95th=[15008],
00:28:11.084 | 99.99th=[15139]
00:28:11.084 bw ( KiB/s): min=89504, max=98304, per=89.97%, avg=93648.00, stdev=3613.26, samples=4
00:28:11.084 iops : min= 5594, max= 6144, avg=5853.00, stdev=225.83, samples=4
00:28:11.084 lat (msec) : 2=0.04%, 4=1.86%, 10=90.21%, 20=7.89%
00:28:11.084 cpu : usr=86.68%, sys=12.67%, ctx=42, majf=0, minf=3
00:28:11.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:28:11.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:11.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:11.084 issued rwts: total=22114,11781,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:11.084 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:11.084 
00:28:11.084 Run status group 0 (all jobs):
00:28:11.084 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=346MiB (362MB), run=2006-2006msec
00:28:11.084 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=184MiB (193MB), run=1811-1811msec
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:11.084 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 641570 ']'
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 641570
00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 641570 ']'
00:28:11.084 06:38:42
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 641570 00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 641570 00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 641570' 00:28:11.084 killing process with pid 641570 00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 641570 00:28:11.084 06:38:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 641570 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.343 06:38:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.881 00:28:13.881 real 0m15.700s 00:28:13.881 user 0m46.529s 00:28:13.881 sys 0m6.449s 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.881 ************************************ 00:28:13.881 END TEST nvmf_fio_host 00:28:13.881 ************************************ 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.881 ************************************ 00:28:13.881 START TEST nvmf_failover 00:28:13.881 ************************************ 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:13.881 * Looking for test storage... 00:28:13.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:13.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.881 --rc genhtml_branch_coverage=1 00:28:13.881 --rc genhtml_function_coverage=1 00:28:13.881 --rc genhtml_legend=1 00:28:13.881 --rc geninfo_all_blocks=1 00:28:13.881 --rc geninfo_unexecuted_blocks=1 00:28:13.881 00:28:13.881 ' 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:13.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.881 --rc genhtml_branch_coverage=1 00:28:13.881 --rc genhtml_function_coverage=1 00:28:13.881 --rc genhtml_legend=1 00:28:13.881 --rc geninfo_all_blocks=1 00:28:13.881 --rc geninfo_unexecuted_blocks=1 00:28:13.881 00:28:13.881 ' 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:13.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.881 --rc genhtml_branch_coverage=1 00:28:13.881 --rc genhtml_function_coverage=1 00:28:13.881 --rc genhtml_legend=1 00:28:13.881 --rc geninfo_all_blocks=1 00:28:13.881 --rc geninfo_unexecuted_blocks=1 00:28:13.881 00:28:13.881 ' 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:13.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.881 --rc genhtml_branch_coverage=1 00:28:13.881 --rc genhtml_function_coverage=1 00:28:13.881 --rc genhtml_legend=1 00:28:13.881 --rc geninfo_all_blocks=1 00:28:13.881 --rc geninfo_unexecuted_blocks=1 00:28:13.881 00:28:13.881 ' 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:13.881 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated; duplicates collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... the same three toolchain directories repeated; duplicates collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same three toolchain directories repeated; duplicates collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same three toolchain directories repeated; duplicates collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:13.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
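
Note the harmless "integer expression expected" complaint above: nvmf/common.sh line 33 feeds an empty string to test's numeric -eq operator. For illustration only (not the common.sh fix), the usual defensive patterns are:

var=""                           # unset/empty flag, as in the trace above
[ "$var" -eq 1 ] 2>/dev/null     # noisy: "[: : integer expression expected"
[ "${var:-0}" -eq 1 ]            # default empty to 0: quiet, evaluates false
[[ $var == 1 ]]                  # string comparison: also safe when empty
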
00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.882 06:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.454 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:20.455 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:20.455 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:20.455 Found net devices under 0000:86:00.0: cvl_0_0 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:20.455 Found net devices under 0000:86:00.1: cvl_0_1 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.455 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:28:20.456 00:28:20.456 --- 10.0.0.2 ping statistics --- 00:28:20.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.456 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:28:20.456 00:28:20.456 --- 10.0.0.1 ping statistics --- 00:28:20.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.456 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=646494 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 646494 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 646494 ']' 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:20.456 06:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:20.456 [2024-11-20 06:38:51.439703] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:28:20.456 [2024-11-20 06:38:51.439748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.456 [2024-11-20 06:38:51.519266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:20.456 [2024-11-20 06:38:51.561508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:20.456 [2024-11-20 06:38:51.561544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.456 [2024-11-20 06:38:51.561551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.456 [2024-11-20 06:38:51.561557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.456 [2024-11-20 06:38:51.561562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.456 [2024-11-20 06:38:51.562899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.456 [2024-11-20 06:38:51.562920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.456 [2024-11-20 06:38:51.562921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.456 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:20.456 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:28:20.456 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.456 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:20.456 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:20.715 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.715 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:20.715 [2024-11-20 06:38:52.489657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.715 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:20.975 Malloc0 00:28:20.975 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.234 06:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.492 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.492 [2024-11-20 06:38:53.288895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.492 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:21.751 [2024-11-20 06:38:53.477413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:21.751 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:22.009 [2024-11-20 06:38:53.686084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 ***
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=646908
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 646908 /var/tmp/bdevperf.sock
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 646908 ']'
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:22.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:22.009 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:22.268 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:22.268 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:28:22.268 06:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:22.835 NVMe0n1
00:28:22.835 06:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:23.094
00:28:23.094 06:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:23.094 06:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=646998
00:28:23.094 06:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:28:24.032 06:38:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:24.292 [2024-11-20 06:38:56.009693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10382d0 is same with the state(6) to be set
00:28:24.292 [... the identical tqpair=0x10382d0 "recv state ... is same with the state(6) to be set" *ERROR* line repeated dozens of times, timestamps 06:38:56.009760 through 06:38:56.010144; duplicates collapsed ...]
00:28:24.292 06:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:28:27.579 06:38:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:27.579
00:28:27.579 06:38:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
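
Taking stock, the target-side churn above reduces to a short RPC sequence; this sketch is assembled from the traced commands (rpc.py path shortened to $rpc), not copied from failover.sh itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                               # three candidate paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# bdevperf attaches NVMe0 to 4420/4421 with -x failover, then listeners are
# removed one at a time so the host is forced to fail over between paths:
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each remove_listener tears down the active qpairs, which is what appears to produce the burst of "recv state of tqpair ... is same with the state(6) to be set" messages collapsed above.
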
00:28:27.837 [2024-11-20 06:38:59.532290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10390a0 is same with the state(6) to be set
00:28:27.837 [... the identical tqpair=0x10390a0 *ERROR* line repeated, timestamps 06:38:59.532330 through 06:38:59.532452; duplicates collapsed ...]
00:28:27.838 06:38:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
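
While listeners come and go, the initiator side can be inspected through bdevperf's RPC socket; a usage sketch (bdev_nvme_get_controllers is the stock SPDK RPC, though the -n filter here is an assumption; the test itself just sleeps and lets -x failover react):

sock=/var/tmp/bdevperf.sock
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s $sock \
    bdev_nvme_get_controllers -n NVMe0    # shows the trid of each attached path
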
00:28:31.124 06:39:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:31.124 [2024-11-20 06:39:02.738551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:31.124 06:39:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:28:32.060 06:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:28:32.318 [2024-11-20 06:39:03.956048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1185340 is same with the state(6) to be set
00:28:32.318 [... the identical tqpair=0x1185340 *ERROR* line repeated, timestamps 06:39:03.956086 through 06:39:03.956142; duplicates collapsed ...]
00:28:32.318 06:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 646998
00:28:38.891 {
00:28:38.891   "results": [
00:28:38.891     {
00:28:38.891       "job": "NVMe0n1",
00:28:38.891       "core_mask": "0x1",
00:28:38.891       "workload": "verify",
00:28:38.891       "status": "finished",
00:28:38.891       "verify_range": {
00:28:38.891         "start": 0,
00:28:38.891         "length": 16384
00:28:38.891       },
00:28:38.891       "queue_depth": 128,
00:28:38.891       "io_size": 4096,
00:28:38.891       "runtime": 15.009608,
00:28:38.891       "iops": 11364.254149741952,
00:28:38.891       "mibps": 44.3916177724295,
00:28:38.891       "io_failed": 5573,
00:28:38.891       "io_timeout": 0,
00:28:38.891       "avg_latency_us": 10884.573123442513,
00:28:38.891       "min_latency_us": 415.45142857142855,
00:28:38.891       "max_latency_us": 31207.619047619046
00:28:38.891     }
00:28:38.891   ],
00:28:38.891   "core_count": 1
00:28:38.891 }
00:28:38.891 06:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 646908
00:28:38.891 06:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 646908 ']'
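
The JSON block above is bdevperf's verdict: roughly 11.4k IOPS sustained across the three forced path switches, with 5573 failed I/Os recorded along the way. If the block is saved to a file, the interesting fields pull out with jq (results.json is a hypothetical name; the test itself only dumps its log to try.txt):

jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.io_failed) failed I/Os over \(.runtime)s"' results.json
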
06:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 646908 00:28:38.891 06:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:28:38.891 06:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.891 06:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 646908 00:28:38.891 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:38.891 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:38.891 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 646908' 00:28:38.891 killing process with pid 646908 00:28:38.891 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 646908 00:28:38.891 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 646908 00:28:38.891 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:38.891 [2024-11-20 06:38:53.763745] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:28:38.891 [2024-11-20 06:38:53.763801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646908 ] 00:28:38.891 [2024-11-20 06:38:53.841710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.891 [2024-11-20 06:38:53.883212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.891 Running I/O for 15 seconds... 
00:28:38.891 11491.00 IOPS, 44.89 MiB/s [2024-11-20T05:39:10.727Z]
[2024-11-20 06:38:56.011232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 06:38:56.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pattern repeated for the remaining in-flight I/O: READ commands for lba 101008-101192 and WRITE commands for lba 101328-101520, every one completed with "ABORTED - SQ DELETION (00/08)"; timestamps 06:38:56.011282 through 06:38:56.012017, duplicates collapsed ...]
[2024-11-20 06:38:56.012025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:55 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101608 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:38.893 [2024-11-20 06:38:56.012328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 
06:38:56.012474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.893 [2024-11-20 06:38:56.012590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.893 [2024-11-20 06:38:56.012609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101840 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 
[2024-11-20 06:38:56.012785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101848 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.012810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101856 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.012834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101864 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.012858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101872 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.012883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101880 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.012908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101888 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.012931] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101896 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.012954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101904 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.012977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.012982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101912 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.012988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.012996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101920 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101928 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101936 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101944 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101952 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101960 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101968 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101976 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101984 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 
[2024-11-20 06:38:56.013226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101992 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102000 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102008 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.894 [2024-11-20 06:38:56.013294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.894 [2024-11-20 06:38:56.013299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102016 len:8 PRP1 0x0 PRP2 0x0 00:28:38.894 [2024-11-20 06:38:56.013305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.894 [2024-11-20 06:38:56.013312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.013318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.013323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101200 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.013330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.013336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.013341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.013346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101208 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.013352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.013359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.013363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101216 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101224 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101232 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101248 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101256 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:101264 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101272 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101280 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101288 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101296 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101304 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101312 len:8 PRP1 0x0 PRP2 0x0 
00:28:38.895 [2024-11-20 06:38:56.024961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.024978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101320 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.024985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.024992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.024998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.025003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101000 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.025009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.025017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.025022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.025028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101008 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.025034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.025041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.025045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.025051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101016 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.025058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.025065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.025070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.025076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101024 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.025082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.025089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.895 [2024-11-20 06:38:56.025094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.895 [2024-11-20 06:38:56.025100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101032 len:8 PRP1 0x0 PRP2 0x0 00:28:38.895 [2024-11-20 06:38:56.025106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.895 [2024-11-20 06:38:56.025113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101040 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101048 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101056 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101064 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101072 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101080 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101088 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101096 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101104 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101112 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101120 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.896 [2024-11-20 06:38:56.025449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.896 [2024-11-20 06:38:56.025456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.896 [2024-11-20 06:38:56.025464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101128 len:8 PRP1 0x0 PRP2 0x0 00:28:38.896 [2024-11-20 06:38:56.025472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:28:38.896 [2024-11-20 06:38:56.025482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:38.896 [2024-11-20 06:38:56.025489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:38.896 [2024-11-20 06:38:56.025496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101136 len:8 PRP1 0x0 PRP2 0x0
00:28:38.896 [2024-11-20 06:38:56.025505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same four-line abort/manual-completion sequence repeats for READ lba:101144-101192 and WRITE lba:101328-101840 (step 8), timestamps 06:38:56.025514 through 06:38:56.034761 ...]
00:28:38.899 [2024-11-20 06:38:56.034816] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:38.899 [2024-11-20 06:38:56.034846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.899 [2024-11-20 06:38:56.034859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for ASYNC EVENT REQUEST cid:1 through cid:3, timestamps 06:38:56.034870 through 06:38:56.034918 ...]
00:28:38.899 [2024-11-20 06:38:56.034927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:38.899 [2024-11-20 06:38:56.034969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fc340 (9): Bad file descriptor
00:28:38.899 [2024-11-20 06:38:56.039908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:38.899 [2024-11-20 06:38:56.062958] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
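The "(00/08)" pair printed with each abort above is the NVMe completion status in hex: Status Code Type 0x00 (generic command status) with Status Code 0x08, which SPDK prints as ABORTED - SQ DELETION; the trailing p/m/dnr fields are the phase, more, and do-not-retry bits. Seeing it here is expected: the failover from 10.0.0.2:4420 to 10.0.0.2:4421 tears down the submission queues on the old path, so queued and in-flight I/O complete with this status before the controller is reset on the new path. As a minimal sketch (not code from this test run, and assuming only the public definitions in SPDK's spdk/nvme_spec.h), a completion callback could recognize this status like so:

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* True when a completion carries the generic status ABORTED - SQ DELETION
 * (sct 0x00 / sc 0x08), i.e. the "(00/08)" pair printed by
 * spdk_nvme_print_completion() in the log above. */
static bool
cpl_aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

The IOPS/MiB/s progress markers that follow are consistent with 4 KiB I/O (len:8 logical blocks of 512 bytes): 11186.50 IOPS x 4096 B = 43.70 MiB/s, which matches the reported figure.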
00:28:38.899 11186.50 IOPS, 43.70 MiB/s
[2024-11-20T05:39:10.735Z] 11299.00 IOPS, 44.14 MiB/s
[2024-11-20T05:39:10.735Z] 11340.75 IOPS, 44.30 MiB/s
[2024-11-20T05:39:10.735Z] [2024-11-20 06:38:59.533803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.899 [2024-11-20 06:38:59.533840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats, interleaved, for READ lba:32936-33144 (SGL TRANSPORT DATA BLOCK) and WRITE lba:33216-33592 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), step 8, timestamps 06:38:59.533854 through 06:38:59.534983 ...]
00:28:38.902 [2024-11-20 06:38:59.535003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:38.902 [2024-11-20 06:38:59.535011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33600 len:8 PRP1 0x0 PRP2 0x0
00:28:38.902 [2024-11-20 06:38:59.535017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort/manual-completion sequence repeats for WRITE lba:33608 and lba:33616, timestamps 06:38:59.535028 through 06:38:59.535064 ...]
00:28:38.902 [2024-11-20 06:38:59.535071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33624 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33632 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33640 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33648 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33656 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33664 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33672 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33680 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33688 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33152 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33696 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33704 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 
06:38:59.535365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33712 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33720 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33728 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33736 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33744 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.902 [2024-11-20 06:38:59.535493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33752 len:8 PRP1 0x0 PRP2 0x0 00:28:38.902 [2024-11-20 06:38:59.535499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.902 [2024-11-20 06:38:59.535506] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.902 [2024-11-20 06:38:59.535510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33760 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33768 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33776 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33784 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33800 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:28:38.903 [2024-11-20 06:38:59.535648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33808 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33816 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33824 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33832 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33840 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33848 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535787] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33856 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33864 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33872 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33880 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.535882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.535887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.535891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33888 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.535897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.547101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.547121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.547129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33896 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.547137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.547146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.547153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.547161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33904 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.547169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.547179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.547185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.547193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33912 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.547206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.547215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.547223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.547232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33920 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.547241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.547249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.547257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.547266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33928 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.547274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.547287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.547294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.547302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33936 len:8 PRP1 0x0 PRP2 0x0 00:28:38.903 [2024-11-20 06:38:59.547310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.903 [2024-11-20 06:38:59.547321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.903 [2024-11-20 06:38:59.547328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.903 [2024-11-20 06:38:59.547336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33944 len:8 PRP1 0x0 PRP2 0x0 00:28:38.904 [2024-11-20 06:38:59.547344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.904 [2024-11-20 06:38:59.547361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.904 
[2024-11-20 06:38:59.547369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33160 len:8 PRP1 0x0 PRP2 0x0 00:28:38.904 [2024-11-20 06:38:59.547377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.904 [2024-11-20 06:38:59.547394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.904 [2024-11-20 06:38:59.547401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33168 len:8 PRP1 0x0 PRP2 0x0 00:28:38.904 [2024-11-20 06:38:59.547410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.904 [2024-11-20 06:38:59.547426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.904 [2024-11-20 06:38:59.547434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33176 len:8 PRP1 0x0 PRP2 0x0 00:28:38.904 [2024-11-20 06:38:59.547443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.904 [2024-11-20 06:38:59.547460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.904 [2024-11-20 06:38:59.547467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33184 len:8 PRP1 0x0 PRP2 0x0 00:28:38.904 [2024-11-20 06:38:59.547476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.904 [2024-11-20 06:38:59.547492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.904 [2024-11-20 06:38:59.547499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33192 len:8 PRP1 0x0 PRP2 0x0 00:28:38.904 [2024-11-20 06:38:59.547507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.904 [2024-11-20 06:38:59.547523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.904 [2024-11-20 06:38:59.547531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33200 len:8 PRP1 0x0 PRP2 0x0 00:28:38.904 [2024-11-20 06:38:59.547541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.904 [2024-11-20 06:38:59.547557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.904 [2024-11-20 06:38:59.547565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33208 len:8 PRP1 0x0 PRP2 0x0 00:28:38.904 [2024-11-20 06:38:59.547574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547622] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:38.904 [2024-11-20 06:38:59.547654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.904 [2024-11-20 06:38:59.547666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.904 [2024-11-20 06:38:59.547687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.904 [2024-11-20 06:38:59.547708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.904 [2024-11-20 06:38:59.547728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.904 [2024-11-20 06:38:59.547738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:38.904 [2024-11-20 06:38:59.547768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fc340 (9): Bad file descriptor 00:28:38.904 [2024-11-20 06:38:59.551512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:38.904 [2024-11-20 06:38:59.618592] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:28:38.904 11145.40 IOPS, 43.54 MiB/s [2024-11-20T05:39:10.740Z] 11219.50 IOPS, 43.83 MiB/s [2024-11-20T05:39:10.740Z] 11286.29 IOPS, 44.09 MiB/s [2024-11-20T05:39:10.740Z] 11319.50 IOPS, 44.22 MiB/s [2024-11-20T05:39:10.740Z] 11327.11 IOPS, 44.25 MiB/s [2024-11-20T05:39:10.740Z]
[... repeated nvme_qpair records elided: queued READ/WRITE commands on qpair 1 (lba 65288-65856 and onward) printed by nvme_io_qpair_print_command and completed with "ABORTED - SQ DELETION (00/08)", interleaved with nvme_qpair_abort_queued_reqs "aborting queued i/o" and nvme_qpair_manual_complete_request "Command completed manually" records, continuing below ...]
Command completed manually: 00:28:38.906 [2024-11-20 06:39:03.960069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65864 len:8 PRP1 0x0 PRP2 0x0 00:28:38.906 [2024-11-20 06:39:03.960075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.906 [2024-11-20 06:39:03.960081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.906 [2024-11-20 06:39:03.960086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.906 [2024-11-20 06:39:03.960093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65872 len:8 PRP1 0x0 PRP2 0x0 00:28:38.906 [2024-11-20 06:39:03.960099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.906 [2024-11-20 06:39:03.960111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.906 [2024-11-20 06:39:03.960116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.906 [2024-11-20 06:39:03.960122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65880 len:8 PRP1 0x0 PRP2 0x0 00:28:38.906 [2024-11-20 06:39:03.960129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.906 [2024-11-20 06:39:03.960136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.906 [2024-11-20 06:39:03.960141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.906 [2024-11-20 06:39:03.960147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65888 len:8 PRP1 0x0 PRP2 0x0 00:28:38.906 [2024-11-20 06:39:03.960154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.906 [2024-11-20 06:39:03.960161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.906 [2024-11-20 06:39:03.960166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.906 [2024-11-20 06:39:03.960171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65896 len:8 PRP1 0x0 PRP2 0x0 00:28:38.906 [2024-11-20 06:39:03.960178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.906 [2024-11-20 06:39:03.960185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.906 [2024-11-20 06:39:03.960189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.906 [2024-11-20 06:39:03.960195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65904 len:8 PRP1 0x0 PRP2 0x0 00:28:38.906 [2024-11-20 06:39:03.960206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.906 [2024-11-20 06:39:03.960214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.906 [2024-11-20 06:39:03.960219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.906 [2024-11-20 
06:39:03.960224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65912 len:8 PRP1 0x0 PRP2 0x0 00:28:38.906 [2024-11-20 06:39:03.960231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.906 [2024-11-20 06:39:03.960238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.906 [2024-11-20 06:39:03.960243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.906 [2024-11-20 06:39:03.960248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65920 len:8 PRP1 0x0 PRP2 0x0 00:28:38.906 [2024-11-20 06:39:03.960260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.906 [2024-11-20 06:39:03.960268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.906 [2024-11-20 06:39:03.960273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65928 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65936 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65944 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65952 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65960 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65968 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65976 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65984 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65992 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66000 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:66008 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66016 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66024 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66032 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66040 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66048 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66056 len:8 PRP1 0x0 PRP2 0x0 
00:28:38.907 [2024-11-20 06:39:03.960665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66064 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66072 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66080 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66088 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66096 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66104 len:8 PRP1 0x0 PRP2 0x0 00:28:38.907 [2024-11-20 06:39:03.960809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.907 [2024-11-20 06:39:03.960817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.907 [2024-11-20 06:39:03.960821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.907 [2024-11-20 06:39:03.960826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66112 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.960832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.960839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.960845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.960850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66120 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.960856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.960863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.960868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.960873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66128 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.960879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.960886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.960892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.960899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66136 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.960906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.960912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66144 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66152 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66160 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66168 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66176 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66184 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66192 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66200 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:38.908 [2024-11-20 06:39:03.972532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66208 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66216 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66224 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66232 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66240 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66248 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972735] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66256 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66264 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66272 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66280 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66288 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.908 [2024-11-20 06:39:03.972904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.908 [2024-11-20 06:39:03.972911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66296 len:8 PRP1 0x0 PRP2 0x0 00:28:38.908 [2024-11-20 06:39:03.972920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.908 [2024-11-20 06:39:03.972930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:28:38.909 [2024-11-20 06:39:03.972936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.909 [2024-11-20 06:39:03.972943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66304 len:8 PRP1 0x0 PRP2 0x0 00:28:38.909 [2024-11-20 06:39:03.972951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.909 [2024-11-20 06:39:03.973001] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:38.909 [2024-11-20 06:39:03.973031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.909 [2024-11-20 06:39:03.973043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.909 [2024-11-20 06:39:03.973053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.909 [2024-11-20 06:39:03.973061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.909 [2024-11-20 06:39:03.973072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.909 [2024-11-20 06:39:03.973080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.909 [2024-11-20 06:39:03.973090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.909 [2024-11-20 06:39:03.973098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.909 [2024-11-20 06:39:03.973107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:28:38.909 [2024-11-20 06:39:03.973145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fc340 (9): Bad file descriptor 00:28:38.909 [2024-11-20 06:39:03.976868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:28:38.909 [2024-11-20 06:39:04.004938] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
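
The burst of ABORTED - SQ DELETION notices above is the expected signature of a path failover: every in-flight and queued WRITE on the torn-down qpair is completed manually with that status before the initiator resets onto the surviving trid. The harness judges success simply by counting the reset markers in the captured bdevperf log, as the @65/@67 trace just below shows. A minimal standalone sketch of that check (log path taken from this run):

    # one 'Resetting controller successful' line is emitted per completed failover
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi
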
00:28:38.909 11286.30 IOPS, 44.09 MiB/s [2024-11-20T05:39:10.745Z] 11303.82 IOPS, 44.16 MiB/s [2024-11-20T05:39:10.745Z] 11324.67 IOPS, 44.24 MiB/s [2024-11-20T05:39:10.745Z] 11345.62 IOPS, 44.32 MiB/s [2024-11-20T05:39:10.745Z] 11361.57 IOPS, 44.38 MiB/s [2024-11-20T05:39:10.745Z] 11370.67 IOPS, 44.42 MiB/s 00:28:38.909 Latency(us) 00:28:38.909 [2024-11-20T05:39:10.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.909 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:38.909 Verification LBA range: start 0x0 length 0x4000 00:28:38.909 NVMe0n1 : 15.01 11364.25 44.39 371.30 0.00 10884.57 415.45 31207.62 00:28:38.909 [2024-11-20T05:39:10.745Z] =================================================================================================================== 00:28:38.909 [2024-11-20T05:39:10.745Z] Total : 11364.25 44.39 371.30 0.00 10884.57 415.45 31207.62 00:28:38.909 Received shutdown signal, test time was about 15.000000 seconds 00:28:38.909 00:28:38.909 Latency(us) 00:28:38.909 [2024-11-20T05:39:10.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.909 [2024-11-20T05:39:10.745Z] =================================================================================================================== 00:28:38.909 [2024-11-20T05:39:10.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=650027 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 650027 /var/tmp/bdevperf.sock 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 650027 ']' 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:38.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:38.909 [2024-11-20 06:39:10.641672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:38.909 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:39.168 [2024-11-20 06:39:10.854275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:39.168 06:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:39.426 NVMe0n1 00:28:39.684 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:39.684 00:28:39.684 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:39.943 00:28:39.943 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:39.943 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:40.202 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:40.460 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:43.746 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:43.746 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:43.746 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=650946 00:28:43.746 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:43.746 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 650946 00:28:44.723 { 00:28:44.723 "results": [ 00:28:44.723 { 00:28:44.723 "job": "NVMe0n1", 00:28:44.723 "core_mask": "0x1", 00:28:44.723 
"workload": "verify", 00:28:44.723 "status": "finished", 00:28:44.723 "verify_range": { 00:28:44.723 "start": 0, 00:28:44.723 "length": 16384 00:28:44.723 }, 00:28:44.723 "queue_depth": 128, 00:28:44.723 "io_size": 4096, 00:28:44.723 "runtime": 1.010164, 00:28:44.723 "iops": 11477.344272811148, 00:28:44.723 "mibps": 44.833376065668546, 00:28:44.723 "io_failed": 0, 00:28:44.723 "io_timeout": 0, 00:28:44.723 "avg_latency_us": 11108.246216023066, 00:28:44.723 "min_latency_us": 2309.3638095238093, 00:28:44.723 "max_latency_us": 9924.022857142858 00:28:44.723 } 00:28:44.723 ], 00:28:44.723 "core_count": 1 00:28:44.723 } 00:28:44.723 06:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:44.723 [2024-11-20 06:39:10.253323] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:28:44.723 [2024-11-20 06:39:10.253376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650027 ] 00:28:44.723 [2024-11-20 06:39:10.328506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.723 [2024-11-20 06:39:10.365874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.723 [2024-11-20 06:39:12.119375] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:44.723 [2024-11-20 06:39:12.119422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.723 [2024-11-20 06:39:12.119434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.723 [2024-11-20 06:39:12.119443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.723 [2024-11-20 06:39:12.119450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.723 [2024-11-20 06:39:12.119458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.723 [2024-11-20 06:39:12.119466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.723 [2024-11-20 06:39:12.119475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.723 [2024-11-20 06:39:12.119483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.723 [2024-11-20 06:39:12.119490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:28:44.723 [2024-11-20 06:39:12.119515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:28:44.723 [2024-11-20 06:39:12.119530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ba340 (9): Bad file descriptor 00:28:44.723 [2024-11-20 06:39:12.170317] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:28:44.723 Running I/O for 1 seconds... 00:28:44.723 11466.00 IOPS, 44.79 MiB/s 00:28:44.723 Latency(us) 00:28:44.723 [2024-11-20T05:39:16.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.723 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:44.723 Verification LBA range: start 0x0 length 0x4000 00:28:44.723 NVMe0n1 : 1.01 11477.34 44.83 0.00 0.00 11108.25 2309.36 9924.02 00:28:44.723 [2024-11-20T05:39:16.559Z] =================================================================================================================== 00:28:44.723 [2024-11-20T05:39:16.559Z] Total : 11477.34 44.83 0.00 0.00 11108.25 2309.36 9924.02 00:28:44.723 06:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:44.723 06:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:45.001 06:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:45.265 06:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:45.265 06:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:45.265 06:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:45.524 06:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 650027 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 650027 ']' 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 650027 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 650027 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 650027' 00:28:48.809 killing process with pid 650027 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 650027 00:28:48.809 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 650027 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.068 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.068 rmmod nvme_tcp 00:28:49.327 rmmod nvme_fabrics 00:28:49.327 rmmod nvme_keyring 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 646494 ']' 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 646494 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 646494 ']' 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 646494 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:49.327 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 646494 00:28:49.327 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:49.327 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:49.327 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 646494' 00:28:49.327 killing process with pid 646494 00:28:49.327 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 646494 00:28:49.327 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 646494 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.586 06:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.492 06:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.492 00:28:51.492 real 0m38.036s 00:28:51.492 user 2m0.333s 00:28:51.492 sys 0m7.972s 00:28:51.492 06:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:51.492 06:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:51.492 ************************************ 00:28:51.492 END TEST nvmf_failover 00:28:51.492 ************************************ 00:28:51.492 06:39:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:51.492 06:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:51.492 06:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:51.492 06:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.752 ************************************ 00:28:51.752 START TEST nvmf_host_discovery 00:28:51.752 ************************************ 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:51.752 * Looking for test storage... 
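The iptr teardown step above removes only the firewall rules the harness tagged with an SPDK_NVMF comment; the same idiom works standalone:

    # Keep every rule except those commented SPDK_NVMF by the test harness.
    iptables-save | grep -v SPDK_NVMF | iptables-restore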
00:28:51.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:51.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.752 --rc genhtml_branch_coverage=1 00:28:51.752 --rc genhtml_function_coverage=1 00:28:51.752 --rc genhtml_legend=1 00:28:51.752 --rc geninfo_all_blocks=1 00:28:51.752 --rc geninfo_unexecuted_blocks=1 00:28:51.752 00:28:51.752 ' 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:51.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.752 --rc genhtml_branch_coverage=1 00:28:51.752 --rc genhtml_function_coverage=1 00:28:51.752 --rc genhtml_legend=1 00:28:51.752 --rc geninfo_all_blocks=1 00:28:51.752 --rc geninfo_unexecuted_blocks=1 00:28:51.752 00:28:51.752 ' 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:51.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.752 --rc genhtml_branch_coverage=1 00:28:51.752 --rc genhtml_function_coverage=1 00:28:51.752 --rc genhtml_legend=1 00:28:51.752 --rc geninfo_all_blocks=1 00:28:51.752 --rc geninfo_unexecuted_blocks=1 00:28:51.752 00:28:51.752 ' 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:51.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.752 --rc genhtml_branch_coverage=1 00:28:51.752 --rc genhtml_function_coverage=1 00:28:51.752 --rc genhtml_legend=1 00:28:51.752 --rc geninfo_all_blocks=1 00:28:51.752 --rc geninfo_unexecuted_blocks=1 00:28:51.752 00:28:51.752 ' 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:51.752 06:39:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.752 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:51.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.753 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:58.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:58.324 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.324 06:39:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:58.324 Found net devices under 0000:86:00.0: cvl_0_0 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:58.324 Found net devices under 0000:86:00.1: cvl_0_1 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.324 
06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.324 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:28:58.325 00:28:58.325 --- 10.0.0.2 ping statistics --- 00:28:58.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.325 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:58.325 00:28:58.325 --- 10.0.0.1 ping statistics --- 00:28:58.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.325 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=655392 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 655392 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 655392 ']' 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.325 [2024-11-20 06:39:29.540467] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
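Reconstructed from the trace, the loopback topology those two pings just verified is two ports of one NIC split across a network namespace (device names as created by this rig):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator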
00:28:58.325 [2024-11-20 06:39:29.540513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.325 [2024-11-20 06:39:29.618227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.325 [2024-11-20 06:39:29.660651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.325 [2024-11-20 06:39:29.660683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.325 [2024-11-20 06:39:29.660691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.325 [2024-11-20 06:39:29.660697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.325 [2024-11-20 06:39:29.660703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.325 [2024-11-20 06:39:29.661161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.325 [2024-11-20 06:39:29.804840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.325 [2024-11-20 06:39:29.817024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.325 null0 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.325 null1 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=655420 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 655420 /tmp/host.sock 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 655420 ']' 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:28:58.325 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:58.326 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:58.326 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:58.326 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:58.326 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.326 [2024-11-20 06:39:29.894950] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
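Condensing the target-side preparation traced so far (rpc_cmd is rpc.py run against the nvmf_tgt inside the namespace; flags copied verbatim from the run):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192       # TCP transport, harness defaults
    rpc_cmd nvmf_subsystem_add_listener \
        nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512               # 1000 MiB, 512 B blocks
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine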
00:28:58.326 [2024-11-20 06:39:29.894991] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655420 ] 00:28:58.326 [2024-11-20 06:39:29.970883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.326 [2024-11-20 06:39:30.016143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:58.326 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.585 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:58.586 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.844 [2024-11-20 06:39:30.434630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:58.844 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:28:58.845 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:28:59.412 [2024-11-20 06:39:31.182705] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:59.412 [2024-11-20 06:39:31.182725] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:59.412 [2024-11-20 06:39:31.182737] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:59.671 
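The attach/connect events above follow from the single host-side call issued earlier; the whole flow, condensed with the socket and NQNs from this run:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test
    # The test then polls bdev_nvme_get_controllers for "nvme0" and
    # bdev_get_bdevs for "nvme0n1" until the discovered subsystem attaches.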
[2024-11-20 06:39:31.312133] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:59.671 [2024-11-20 06:39:31.494104] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:59.671 [2024-11-20 06:39:31.494837] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xee5df0:1 started. 00:28:59.671 [2024-11-20 06:39:31.496209] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:59.671 [2024-11-20 06:39:31.496225] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:59.671 [2024-11-20 06:39:31.502015] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xee5df0 was disconnected and freed. delete nvme_qpair. 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:59.930 06:39:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.930 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:00.189 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:00.190 06:39:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:00.190 [2024-11-20 06:39:31.856433] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xeb4620:1 started. 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:00.190 [2024-11-20 06:39:31.862717] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xeb4620 was disconnected and freed. delete nvme_qpair. 
00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.190 [2024-11-20 06:39:31.950714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:00.190 [2024-11-20 06:39:31.951174] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:00.190 [2024-11-20 06:39:31.951192] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.190 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:00.190 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:00.449 06:39:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.449 [2024-11-20 06:39:32.078571] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:00.449 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:29:00.708 [2024-11-20 06:39:32.338736] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:29:00.708 [2024-11-20 06:39:32.338769] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:00.708 [2024-11-20 06:39:32.338781] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:00.708 [2024-11-20 06:39:32.338786] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:01.275 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.275 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:01.275 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.535 [2024-11-20 06:39:33.203225] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:01.535 [2024-11-20 06:39:33.203246] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.535 [2024-11-20 06:39:33.208421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.535 [2024-11-20 06:39:33.208439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.535 [2024-11-20 06:39:33.208448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.535 [2024-11-20 06:39:33.208455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.535 [2024-11-20 06:39:33.208462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.535 [2024-11-20 06:39:33.208469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.535 [2024-11-20 06:39:33.208475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.535 [2024-11-20 06:39:33.208482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.535 [2024-11-20 06:39:33.208488] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6390 is same with the state(6) to be set 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:01.535 [2024-11-20 06:39:33.218434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6390 (9): Bad file descriptor 00:29:01.535 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.536 [2024-11-20 06:39:33.228472] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:01.536 [2024-11-20 06:39:33.228483] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:01.536 [2024-11-20 06:39:33.228488] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:01.536 [2024-11-20 06:39:33.228493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:01.536 [2024-11-20 06:39:33.228511] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:01.536 [2024-11-20 06:39:33.228751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.536 [2024-11-20 06:39:33.228770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb6390 with addr=10.0.0.2, port=4420 00:29:01.536 [2024-11-20 06:39:33.228779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6390 is same with the state(6) to be set 00:29:01.536 [2024-11-20 06:39:33.228791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6390 (9): Bad file descriptor 00:29:01.536 [2024-11-20 06:39:33.228802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:01.536 [2024-11-20 06:39:33.228809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:01.536 [2024-11-20 06:39:33.228817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:01.536 [2024-11-20 06:39:33.228824] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:01.536 [2024-11-20 06:39:33.228829] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:29:01.536 [2024-11-20 06:39:33.228833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:01.536 [2024-11-20 06:39:33.238542] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:01.536 [2024-11-20 06:39:33.238553] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:01.536 [2024-11-20 06:39:33.238557] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:01.536 [2024-11-20 06:39:33.238561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:01.536 [2024-11-20 06:39:33.238575] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:01.536 [2024-11-20 06:39:33.238823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.536 [2024-11-20 06:39:33.238837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb6390 with addr=10.0.0.2, port=4420 00:29:01.536 [2024-11-20 06:39:33.238844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6390 is same with the state(6) to be set 00:29:01.536 [2024-11-20 06:39:33.238854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6390 (9): Bad file descriptor 00:29:01.536 [2024-11-20 06:39:33.238865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:01.536 [2024-11-20 06:39:33.238872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:01.536 [2024-11-20 06:39:33.238879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:01.536 [2024-11-20 06:39:33.238885] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:01.536 [2024-11-20 06:39:33.238889] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:01.536 [2024-11-20 06:39:33.238893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:01.536 [2024-11-20 06:39:33.248606] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:01.536 [2024-11-20 06:39:33.248620] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:01.536 [2024-11-20 06:39:33.248624] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:01.536 [2024-11-20 06:39:33.248628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:01.536 [2024-11-20 06:39:33.248643] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:01.536 [2024-11-20 06:39:33.248923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.536 [2024-11-20 06:39:33.248937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb6390 with addr=10.0.0.2, port=4420 00:29:01.536 [2024-11-20 06:39:33.248946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6390 is same with the state(6) to be set 00:29:01.536 [2024-11-20 06:39:33.248957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6390 (9): Bad file descriptor 00:29:01.536 [2024-11-20 06:39:33.248967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:01.536 [2024-11-20 06:39:33.248974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:01.536 [2024-11-20 06:39:33.248981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:01.536 [2024-11-20 06:39:33.248987] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:01.536 [2024-11-20 06:39:33.248991] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:01.536 [2024-11-20 06:39:33.248995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:01.536 [2024-11-20 06:39:33.258675] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:01.536 [2024-11-20 06:39:33.258689] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:01.536 [2024-11-20 06:39:33.258693] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:01.536 [2024-11-20 06:39:33.258697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:01.536 [2024-11-20 06:39:33.258711] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:01.536 [2024-11-20 06:39:33.258933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.536 [2024-11-20 06:39:33.258947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb6390 with addr=10.0.0.2, port=4420 00:29:01.536 [2024-11-20 06:39:33.258955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6390 is same with the state(6) to be set 00:29:01.536 [2024-11-20 06:39:33.258968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6390 (9): Bad file descriptor 00:29:01.536 [2024-11-20 06:39:33.258978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:01.536 [2024-11-20 06:39:33.258984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:01.536 [2024-11-20 06:39:33.258991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:01.536 [2024-11-20 06:39:33.259001] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:01.536 [2024-11-20 06:39:33.259005] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:01.536 [2024-11-20 06:39:33.259009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.536 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:01.536 [2024-11-20 06:39:33.268742] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:01.536 [2024-11-20 06:39:33.268758] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:01.536 [2024-11-20 06:39:33.268763] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:01.536 [2024-11-20 06:39:33.268767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:01.536 [2024-11-20 06:39:33.268782] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:01.536 [2024-11-20 06:39:33.269007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.536 [2024-11-20 06:39:33.269021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb6390 with addr=10.0.0.2, port=4420 00:29:01.536 [2024-11-20 06:39:33.269031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6390 is same with the state(6) to be set 00:29:01.536 [2024-11-20 06:39:33.269043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6390 (9): Bad file descriptor 00:29:01.536 [2024-11-20 06:39:33.269054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:01.536 [2024-11-20 06:39:33.269061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:01.536 [2024-11-20 06:39:33.269070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:01.536 [2024-11-20 06:39:33.269076] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:01.536 [2024-11-20 06:39:33.269081] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:01.536 [2024-11-20 06:39:33.269085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:01.536 [2024-11-20 06:39:33.278814] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:01.537 [2024-11-20 06:39:33.278826] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:01.537 [2024-11-20 06:39:33.278830] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:01.537 [2024-11-20 06:39:33.278834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:01.537 [2024-11-20 06:39:33.278848] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:01.537 [2024-11-20 06:39:33.279021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.537 [2024-11-20 06:39:33.279033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb6390 with addr=10.0.0.2, port=4420 00:29:01.537 [2024-11-20 06:39:33.279044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6390 is same with the state(6) to be set 00:29:01.537 [2024-11-20 06:39:33.279054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6390 (9): Bad file descriptor 00:29:01.537 [2024-11-20 06:39:33.279063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:01.537 [2024-11-20 06:39:33.279069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:01.537 [2024-11-20 06:39:33.279076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:01.537 [2024-11-20 06:39:33.279082] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:29:01.537 [2024-11-20 06:39:33.279087] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:01.537 [2024-11-20 06:39:33.279090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:01.537 [2024-11-20 06:39:33.288879] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:01.537 [2024-11-20 06:39:33.288889] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:01.537 [2024-11-20 06:39:33.288893] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:01.537 [2024-11-20 06:39:33.288897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:01.537 [2024-11-20 06:39:33.288910] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:01.537 [2024-11-20 06:39:33.289154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.537 [2024-11-20 06:39:33.289167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb6390 with addr=10.0.0.2, port=4420 00:29:01.537 [2024-11-20 06:39:33.289175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6390 is same with the state(6) to be set 00:29:01.537 [2024-11-20 06:39:33.289184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6390 (9): Bad file descriptor 00:29:01.537 [2024-11-20 06:39:33.289196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:01.537 [2024-11-20 06:39:33.289207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:01.537 [2024-11-20 06:39:33.289214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:01.537 [2024-11-20 06:39:33.289220] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:01.537 [2024-11-20 06:39:33.289224] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:01.537 [2024-11-20 06:39:33.289228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:01.537 [2024-11-20 06:39:33.290495] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:01.537 [2024-11-20 06:39:33.290510] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.537 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.797 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:03.169 [2024-11-20 06:39:34.622741] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:03.169 [2024-11-20 06:39:34.622756] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:03.169 [2024-11-20 06:39:34.622767] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:03.169 [2024-11-20 06:39:34.711037] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:03.428 [2024-11-20 06:39:35.016371] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:29:03.428 [2024-11-20 06:39:35.016928] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x101dea0:1 started. 00:29:03.428 [2024-11-20 06:39:35.018509] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:03.428 [2024-11-20 06:39:35.018534] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:03.428 [2024-11-20 06:39:35.021038] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x101dea0 was disconnected and freed. delete nvme_qpair. 
00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:03.428 request: 00:29:03.428 { 00:29:03.428 "name": "nvme", 00:29:03.428 "trtype": "tcp", 00:29:03.428 "traddr": "10.0.0.2", 00:29:03.428 "adrfam": "ipv4", 00:29:03.428 "trsvcid": "8009", 00:29:03.428 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:03.428 "wait_for_attach": true, 00:29:03.428 "method": "bdev_nvme_start_discovery", 00:29:03.428 "req_id": 1 00:29:03.428 } 00:29:03.428 Got JSON-RPC error response 00:29:03.428 response: 00:29:03.428 { 00:29:03.428 "code": -17, 00:29:03.428 "message": "File exists" 00:29:03.428 } 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.428 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:03.429 request: 00:29:03.429 { 00:29:03.429 "name": "nvme_second", 00:29:03.429 "trtype": "tcp", 00:29:03.429 "traddr": "10.0.0.2", 00:29:03.429 "adrfam": "ipv4", 00:29:03.429 "trsvcid": "8009", 00:29:03.429 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:03.429 "wait_for_attach": true, 00:29:03.429 "method": "bdev_nvme_start_discovery", 00:29:03.429 "req_id": 1 00:29:03.429 } 00:29:03.429 Got JSON-RPC error response 00:29:03.429 response: 00:29:03.429 { 00:29:03.429 "code": -17, 00:29:03.429 "message": "File exists" 00:29:03.429 } 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.429 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:04.806 [2024-11-20 06:39:36.266087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.806 [2024-11-20 06:39:36.266113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d9f0 with addr=10.0.0.2, port=8010 00:29:04.806 [2024-11-20 06:39:36.266131] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:04.806 [2024-11-20 
06:39:36.266138] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:04.806 [2024-11-20 06:39:36.266144] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:05.741 [2024-11-20 06:39:37.268592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.741 [2024-11-20 06:39:37.268618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d9f0 with addr=10.0.0.2, port=8010 00:29:05.741 [2024-11-20 06:39:37.268630] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:05.741 [2024-11-20 06:39:37.268636] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:05.741 [2024-11-20 06:39:37.268642] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:06.676 [2024-11-20 06:39:38.270823] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:06.676 request: 00:29:06.676 { 00:29:06.676 "name": "nvme_second", 00:29:06.676 "trtype": "tcp", 00:29:06.676 "traddr": "10.0.0.2", 00:29:06.676 "adrfam": "ipv4", 00:29:06.676 "trsvcid": "8010", 00:29:06.676 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:06.676 "wait_for_attach": false, 00:29:06.676 "attach_timeout_ms": 3000, 00:29:06.676 "method": "bdev_nvme_start_discovery", 00:29:06.676 "req_id": 1 00:29:06.676 } 00:29:06.676 Got JSON-RPC error response 00:29:06.676 response: 00:29:06.676 { 00:29:06.676 "code": -110, 00:29:06.676 "message": "Connection timed out" 00:29:06.676 } 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 655420 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:06.676 06:39:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:06.676 rmmod nvme_tcp 00:29:06.676 rmmod nvme_fabrics 00:29:06.676 rmmod nvme_keyring 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 655392 ']' 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 655392 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 655392 ']' 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 655392 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 655392 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 655392' 00:29:06.676 killing process with pid 655392 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 655392 00:29:06.676 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 655392 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.935 
06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.935 06:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.836 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:09.095 00:29:09.095 real 0m17.330s 00:29:09.095 user 0m20.658s 00:29:09.095 sys 0m5.918s 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:09.095 ************************************ 00:29:09.095 END TEST nvmf_host_discovery 00:29:09.095 ************************************ 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.095 ************************************ 00:29:09.095 START TEST nvmf_host_multipath_status 00:29:09.095 ************************************ 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:09.095 * Looking for test storage... 00:29:09.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.095 06:39:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.095 --rc genhtml_branch_coverage=1 00:29:09.095 --rc genhtml_function_coverage=1 00:29:09.095 --rc genhtml_legend=1 00:29:09.095 --rc geninfo_all_blocks=1 00:29:09.095 --rc geninfo_unexecuted_blocks=1 00:29:09.095 00:29:09.095 ' 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.095 --rc genhtml_branch_coverage=1 00:29:09.095 --rc genhtml_function_coverage=1 00:29:09.095 --rc genhtml_legend=1 00:29:09.095 --rc geninfo_all_blocks=1 00:29:09.095 --rc geninfo_unexecuted_blocks=1 00:29:09.095 00:29:09.095 ' 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.095 --rc genhtml_branch_coverage=1 00:29:09.095 --rc genhtml_function_coverage=1 00:29:09.095 --rc genhtml_legend=1 00:29:09.095 --rc geninfo_all_blocks=1 00:29:09.095 --rc geninfo_unexecuted_blocks=1 00:29:09.095 00:29:09.095 ' 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:09.095 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:09.095 --rc genhtml_branch_coverage=1 00:29:09.095 --rc genhtml_function_coverage=1 00:29:09.095 --rc genhtml_legend=1 00:29:09.095 --rc geninfo_all_blocks=1 00:29:09.095 --rc geninfo_unexecuted_blocks=1 00:29:09.095 00:29:09.095 ' 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.095 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.354 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:29:09.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.355 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:15.927 06:39:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:15.927 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.928 
06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:15.928 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:15.928 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:15.928 Found net devices under 0000:86:00.0: cvl_0_0 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:15.928 Found net devices under 0000:86:00.1: cvl_0_1 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:15.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:15.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:29:15.928 00:29:15.928 --- 10.0.0.2 ping statistics --- 00:29:15.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.928 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:15.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:15.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:29:15.928 00:29:15.928 --- 10.0.0.1 ping statistics --- 00:29:15.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.928 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=660493 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 660493 
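The ping exchange above confirms the split topology that nvmf/common.sh just built: one e810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2, while its peer (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1, so target and host run on separate network stacks over a physical loop. Condensed from the trace, with the same interface names (run as root):

# Give the target its own network namespace and move one port into it.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator address in the root namespace, target address inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links (and the namespaced loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in (the ipts helper also tags the rule with
# an SPDK_NVMF comment so it can be stripped again on cleanup).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because the target lives in the namespace, every target-side command from here on is wrapped in "ip netns exec cvl_0_0_ns_spdk", as the nvmf_tgt launch below shows.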
00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 660493 ']' 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:15.928 06:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:15.928 [2024-11-20 06:39:46.992200] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:29:15.928 [2024-11-20 06:39:46.992253] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.928 [2024-11-20 06:39:47.072414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.928 [2024-11-20 06:39:47.113935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.928 [2024-11-20 06:39:47.113969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.928 [2024-11-20 06:39:47.113976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.928 [2024-11-20 06:39:47.113983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.929 [2024-11-20 06:39:47.113988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
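With nvmf_tgt now up on cores 0-1 inside the namespace, the script provisions the target over the default /var/tmp/spdk.sock and then starts bdevperf as a multipath initiator on /var/tmp/bdevperf.sock. A condensed sketch of the RPC sequence that follows in the trace (rpc.py path abbreviated; all arguments as logged):

# Target side: TCP transport, a 64 MiB malloc bdev, and a subsystem
# exposed on two listeners so the host sees two paths to one namespace.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Host side: attach both listeners under one controller name with -x multipath,
# which makes them two I/O paths of the same Nvme0n1 bdev.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# The test's port_status helper is just this jq filter over the io-paths dump,
# compared against an expected true/false string:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

The repeated check_status blocks in the remainder of the trace evaluate that filter for .current, .connected, and .accessible on both ports, while set_ANA_state flips each listener between optimized, non_optimized, and inaccessible to drive which path reports current=true.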
00:29:15.929 [2024-11-20 06:39:47.115191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.929 [2024-11-20 06:39:47.115193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=660493 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:15.929 [2024-11-20 06:39:47.411619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:15.929 Malloc0 00:29:15.929 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:16.187 06:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:16.445 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:16.445 [2024-11-20 06:39:48.224426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.445 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:16.703 [2024-11-20 06:39:48.420936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=660749 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 660749 
/var/tmp/bdevperf.sock 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 660749 ']' 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:16.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:16.703 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:16.961 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:16.961 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:29:16.961 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:17.217 06:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:17.780 Nvme0n1 00:29:17.780 06:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:18.039 Nvme0n1 00:29:18.039 06:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:29:18.039 06:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:19.955 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:29:19.955 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:20.224 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:20.482 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:29:21.415 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:29:21.415 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:21.415 06:39:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.415 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:21.673 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:21.673 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:21.673 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.673 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:21.931 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:21.931 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:21.931 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.931 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:22.188 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:22.188 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:22.188 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:22.188 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:22.446 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:22.446 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:22.446 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:22.446 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:22.446 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:22.446 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:22.446 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:22.446 06:39:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:22.703 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:22.704 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:29:22.704 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:22.962 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:23.220 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:29:24.154 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:29:24.154 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:24.154 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.154 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:24.412 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:24.412 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:24.412 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.412 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:24.670 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:24.670 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:24.670 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.670 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:24.670 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:24.670 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:24.670 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:29:24.670 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.928 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:24.928 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:24.928 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.928 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:25.187 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:25.187 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:25.187 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:25.187 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:25.445 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:25.445 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:29:25.445 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:25.702 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:25.702 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:29:27.074 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:29:27.074 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:27.074 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.074 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:27.074 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:27.074 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:27.074 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:27.074 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.331 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:27.331 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:27.331 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.331 06:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:27.331 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:27.331 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:27.331 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.331 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:27.589 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:27.589 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:27.589 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.589 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:27.846 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:27.846 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:27.846 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.846 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:28.104 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:28.104 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:29:28.104 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:29:28.361 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:28.619 06:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:29:29.552 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:29:29.552 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:29.552 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:29.552 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:29.809 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:29.809 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:29.810 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:29.810 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:30.068 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:30.068 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:30.068 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.068 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:30.068 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:30.068 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:30.068 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.068 06:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:30.326 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:30.326 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:30.326 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:29:30.326 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:30.584 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:30.584 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:30.584 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.584 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:30.841 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:30.841 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:29:30.841 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:30.841 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:31.098 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:29:32.031 06:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:29:32.289 06:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:32.289 06:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.289 06:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:32.289 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:32.289 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:32.290 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.290 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:32.548 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:32.548 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:32.548 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.548 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:32.807 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:32.807 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:32.807 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.807 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:33.065 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.065 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:33.065 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.065 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:33.065 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:33.065 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:33.065 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.065 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:33.323 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:33.323 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:29:33.323 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:33.585 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:33.585 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:29:34.959 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:29:34.959 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:34.959 06:40:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:34.959 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:34.959 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:34.959 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:34.959 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:34.959 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:35.216 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:35.216 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:35.217 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:35.217 06:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:35.217 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:35.217 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:35.217 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:35.217 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:35.474 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:35.474 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:35.474 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:35.475 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:35.731 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:35.731 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:35.731 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:35.731 
06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:35.987 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:35.987 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:36.244 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:29:36.244 06:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:36.501 06:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:36.501 06:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:37.874 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:37.874 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:37.874 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:37.874 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:37.874 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:37.874 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:37.874 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:37.874 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:38.133 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.134 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:38.134 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.134 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:38.134 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.134 06:40:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:38.134 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.134 06:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:38.392 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.392 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:38.392 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.392 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:38.651 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.651 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:38.651 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.651 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:38.910 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.910 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:29:38.910 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:39.168 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:39.168 06:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:29:40.546 06:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:29:40.546 06:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:40.546 06:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.546 06:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:40.546 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:40.546 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:40.546 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.546 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:40.805 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:40.805 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:40.805 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.805 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:40.805 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:40.805 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:40.805 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.805 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:41.064 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.064 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:41.064 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:41.064 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:41.323 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.323 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:41.323 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:41.323 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:41.582 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.582 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:29:41.582 
06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:41.841 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:42.100 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:29:43.037 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:29:43.037 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:43.037 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.037 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:43.295 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.296 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:43.296 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:43.296 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.296 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.296 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:43.296 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.296 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:43.555 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.555 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:43.555 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.555 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:43.814 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.814 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:43.814 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.814 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:44.073 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:44.073 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:44.073 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:44.073 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:44.331 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:44.331 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:29:44.331 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:44.331 06:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:44.590 06:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:45.964 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:45.964 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:45.964 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:45.964 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:45.964 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:45.964 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:45.964 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:45.964 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:46.222 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:29:46.222 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:46.222 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.222 06:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:46.222 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:46.222 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:46.222 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.222 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:46.480 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:46.480 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:46.480 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.480 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:46.738 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:46.738 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:46.738 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:46.738 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 660749 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 660749 ']' 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 660749 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 660749 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 660749' 00:29:46.997 killing process with pid 660749 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 660749 00:29:46.997 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 660749 00:29:46.997 { 00:29:46.997 "results": [ 00:29:46.997 { 00:29:46.997 "job": "Nvme0n1", 00:29:46.997 "core_mask": "0x4", 00:29:46.997 "workload": "verify", 00:29:46.997 "status": "terminated", 00:29:46.997 "verify_range": { 00:29:46.997 "start": 0, 00:29:46.997 "length": 16384 00:29:46.997 }, 00:29:46.997 "queue_depth": 128, 00:29:46.997 "io_size": 4096, 00:29:46.997 "runtime": 28.839307, 00:29:46.997 "iops": 10690.374772181593, 00:29:46.997 "mibps": 41.759276453834346, 00:29:46.997 "io_failed": 0, 00:29:46.997 "io_timeout": 0, 00:29:46.997 "avg_latency_us": 11953.564794213733, 00:29:46.997 "min_latency_us": 577.3409523809523, 00:29:46.997 "max_latency_us": 3083812.083809524 00:29:46.997 } 00:29:46.997 ], 00:29:46.997 "core_count": 1 00:29:46.997 } 00:29:47.282 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 660749 00:29:47.282 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:47.282 [2024-11-20 06:39:48.482764] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:29:47.282 [2024-11-20 06:39:48.482819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660749 ] 00:29:47.282 [2024-11-20 06:39:48.558629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.282 [2024-11-20 06:39:48.599411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.282 Running I/O for 90 seconds... 
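Every ANA step in the trace above follows the same three-helper pattern from test/nvmf/host/multipath_status.sh: flip the ANA state of the two target listeners, sleep 1 so the host notices the change, then assert the current/connected/accessible flags of each I/O path over bdevperf's RPC socket. A minimal bash sketch of those helpers, reconstructed from the xtrace lines above — the rpc.py path, the /var/tmp/bdevperf.sock socket, and the nqn.2016-06.io.spdk:cnode1 NQN are taken from the log, but the function bodies are an approximation of what the trace shows, not the shipped SPDK test code:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# port_status <trsvcid> <field> <expected>: ask bdevperf for its I/O paths and
# check that <field> (current/connected/accessible) of the path through the
# given listener port equals <expected>; a mismatch fails the test under set -e.
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# set_ANA_state <state for 4420> <state for 4421>: change the ANA state
# (optimized / non_optimized / inaccessible) of the two target listeners.
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>:
# one assertion per flag per port, in the order seen at multipath_status.sh@68-73.
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Each round in the log is one such triple, e.g. set_ANA_state non_optimized inaccessible; sleep 1; check_status true false true true true false. After bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, the same checks expect current=true on both 4420 and 4421 at once, since I/O is then spread across every optimized path rather than pinned to a single active one.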
00:29:47.282 11513.00 IOPS, 44.97 MiB/s [2024-11-20T05:40:19.118Z] 11459.50 IOPS, 44.76 MiB/s [2024-11-20T05:40:19.118Z] 11472.67 IOPS, 44.82 MiB/s [2024-11-20T05:40:19.118Z] 11536.75 IOPS, 45.07 MiB/s [2024-11-20T05:40:19.118Z] 11558.40 IOPS, 45.15 MiB/s [2024-11-20T05:40:19.118Z] 11544.50 IOPS, 45.10 MiB/s [2024-11-20T05:40:19.118Z] 11549.57 IOPS, 45.12 MiB/s [2024-11-20T05:40:19.118Z] 11533.25 IOPS, 45.05 MiB/s [2024-11-20T05:40:19.118Z] 11538.67 IOPS, 45.07 MiB/s [2024-11-20T05:40:19.118Z] 11529.40 IOPS, 45.04 MiB/s [2024-11-20T05:40:19.118Z] 11535.64 IOPS, 45.06 MiB/s [2024-11-20T05:40:19.118Z] 11547.50 IOPS, 45.11 MiB/s [2024-11-20T05:40:19.118Z] [2024-11-20 06:40:02.650305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.282 [2024-11-20 06:40:02.650343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:47.282 [2024-11-20 06:40:02.650364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.282 [2024-11-20 06:40:02.650371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:47.282 [2024-11-20 06:40:02.650384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.282 [2024-11-20 06:40:02.650391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:47.282 [2024-11-20 06:40:02.650404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.282 [2024-11-20 06:40:02.650411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:47.282 [2024-11-20 06:40:02.650423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.282 [2024-11-20 06:40:02.650430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:47.282 [2024-11-20 06:40:02.650442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.282 [2024-11-20 06:40:02.650449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:47.282 [2024-11-20 06:40:02.650465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.282 [2024-11-20 06:40:02.650472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:47.282 [2024-11-20 06:40:02.650484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.282 [2024-11-20 06:40:02.650491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:47.282 [2024-11-20 
06:40:02.650503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.282 [2024-11-20 06:40:02.650509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:47.283 [2024-11-20 06:40:02.651211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.283 [2024-11-20 06:40:02.651218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[... hundreds of further nvme_io_qpair_print_command WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK) notices on qid:1 nsid:1, lba 2256-3272, len:8, each followed by an spdk_nvme_print_completion notice of ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
[2024-11-20 06:40:02.667444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.287 [2024-11-20 06:40:02.667463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.287 [2024-11-20 06:40:02.667472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.287 [2024-11-20 06:40:02.667488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.287 [2024-11-20 06:40:02.667496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.287 [2024-11-20 06:40:02.667512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.287 [2024-11-20 06:40:02.667521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.287 [2024-11-20 06:40:02.667538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.287 [2024-11-20 06:40:02.667546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2344 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.288 [2024-11-20 06:40:02.667922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667938] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.667947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.667972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.667988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.667996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668188] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.668317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.668326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.669007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.669022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.669040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.669050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.669069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.669078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.669094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.669103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:29:47.288 [2024-11-20 06:40:02.669118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.669128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.669144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.669152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.669168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.288 [2024-11-20 06:40:02.669177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:47.288 [2024-11-20 06:40:02.669193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 
cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.669989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.669998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.670016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.670025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.670041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.670050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.670066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.670075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.670092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.670100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.670117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 
[2024-11-20 06:40:02.670126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.670142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.670151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.670167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.289 [2024-11-20 06:40:02.670175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:47.289 [2024-11-20 06:40:02.670192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2728 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.290 [2024-11-20 06:40:02.670560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.290 [2024-11-20 06:40:02.670585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.290 [2024-11-20 06:40:02.670611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.290 [2024-11-20 06:40:02.670641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.290 [2024-11-20 06:40:02.670667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.670834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.670843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:47.290 
[2024-11-20 06:40:02.671950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:47.290 [2024-11-20 06:40:02.671975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.290 [2024-11-20 06:40:02.671984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 
cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.291 [2024-11-20 06:40:02.672272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.291 [2024-11-20 06:40:02.672298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.291 [2024-11-20 06:40:02.672323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.291 [2024-11-20 06:40:02.672349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.291 [2024-11-20 06:40:02.672375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.291 [2024-11-20 06:40:02.672403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.291 [2024-11-20 06:40:02.672428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.291 [2024-11-20 06:40:02.672444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.291 [2024-11-20 06:40:02.672453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
00:29:47.291 [2024-11-20 06:40:02.672469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.291 [2024-11-20 06:40:02.672478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:47.291 [2024-11-20 06:40:02.672696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.291 [2024-11-20 06:40:02.672707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:47.291-00:29:47.297 [2024-11-20 06:40:02.672469 - 06:40:02.687695] nvme_qpair.c: command/completion *NOTICE* pairs of this form repeat for every outstanding I/O on qid:1 (READ lba:2256-2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE lba:2424-3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; cid 0-126, sqhd cycling 0x0000-0x007f), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0.
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.687706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.687736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.687766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.687796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.687826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.687857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.297 [2024-11-20 06:40:02.687887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.297 [2024-11-20 06:40:02.687918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.297 [2024-11-20 06:40:02.687948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.687967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.297 [2024-11-20 06:40:02.687978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:29:47.297 [2024-11-20 06:40:02.687997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.297 [2024-11-20 06:40:02.688009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.688028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.688041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.688061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.688071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.688090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.688101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.688120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.688131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.688150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.688161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.297 [2024-11-20 06:40:02.689519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.297 [2024-11-20 06:40:02.689536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.689546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.689572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.689601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.689627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.689653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.689679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.689705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.689732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.689987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.689997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.690025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.690052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.690078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.690105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.690130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.298 [2024-11-20 06:40:02.690157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:25 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.690507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.690517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.691152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.691167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:47.298 [2024-11-20 06:40:02.691187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.298 [2024-11-20 06:40:02.691208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:47.299 
[2024-11-20 06:40:02.691386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 
cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.691977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.691986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:47.299 [2024-11-20 06:40:02.692243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.299 [2024-11-20 06:40:02.692252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 
[2024-11-20 06:40:02.692434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.300 [2024-11-20 06:40:02.692677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:47.300 [2024-11-20 06:40:02.692694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2736 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:29:47.300 [2024-11-20 06:40:02.692703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:47.300 [2024-11-20 06:40:02.692719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.300 [2024-11-20 06:40:02.692729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[... roughly 190 further command/completion pairs of the same form omitted: interleaved WRITE (lba 2424-3272) and READ (lba 2256-2416) I/O, all len:8 on sqid:1, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); many LBAs recur under new cids as the I/O is reissued ...]
00:29:47.305 [2024-11-20 06:40:02.699594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.305 [2024-11-20 06:40:02.699601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0
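Note on the status pair: spdk_nvme_print_completion renders each completion above as a status name followed by "(SCT/SC)" in hex. Per the NVMe base specification, "(03/02)" is Status Code Type 3h (Path Related Status) with Status Code 02h (Asymmetric Access Inaccessible): the ANA state of the path behind this qpair makes the namespace unreachable, which is the condition this nvmf test appears to be exercising. The dnr:0 field on every completion means the Do Not Retry bit is clear, so the host may reissue the I/O, which is why the same LBAs reappear under new cids. Below is a minimal sketch of that decoding; the constant and function names follow the spec wording and are illustrative assumptions, not identifiers from SPDK's headers.

/* decode_ana_status.c - minimal sketch, not part of the test output above.
 * Decodes the "(SCT/SC)" hex pair that spdk_nvme_print_completion logs.
 * Constant names are assumptions derived from NVMe base spec wording. */
#include <stdint.h>
#include <stdio.h>

#define SCT_PATH_RELATED    0x3  /* Status Code Type 3h: Path Related Status */
#define SC_ANA_INACCESSIBLE 0x02 /* Status Code 02h: Asymmetric Access Inaccessible */

static const char *decode(uint8_t sct, uint8_t sc)
{
	if (sct == SCT_PATH_RELATED && sc == SC_ANA_INACCESSIBLE) {
		return "ASYMMETRIC ACCESS INACCESSIBLE";
	}
	return "other status";
}

int main(void)
{
	uint8_t sct = 0x3, sc = 0x02; /* the pair the log prints as "(03/02)" */
	printf("(%02x/%02x) -> %s\n", sct, sc, decode(sct, sc));
	return 0;
}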
00:29:47.305 [2024-11-20 06:40:02.699612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.699631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.699649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.699669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.699688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.699706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.699725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.699744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.699762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.699769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:47.305 [2024-11-20 06:40:02.700495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.305 [2024-11-20 06:40:02.700502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 
[2024-11-20 06:40:02.700897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.700947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.700954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2584 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:47.306 [2024-11-20 06:40:02.701534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.306 [2024-11-20 06:40:02.701541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.307 [2024-11-20 06:40:02.701858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.307 [2024-11-20 06:40:02.701877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.307 [2024-11-20 06:40:02.701897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.307 [2024-11-20 06:40:02.701915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.307 [2024-11-20 06:40:02.701934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:47.307 
[2024-11-20 06:40:02.701946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.701984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.701991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.307 [2024-11-20 06:40:02.702258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:47.307 [2024-11-20 06:40:02.702270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.702983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.702990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.308 [2024-11-20 06:40:02.703161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.308 [2024-11-20 06:40:02.703300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.308 [2024-11-20 06:40:02.703821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.308 [2024-11-20 06:40:02.703833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.703852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.703871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.703892] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.703910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.703929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.703947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.703966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.703984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.703991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.704003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.704010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.704023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.704030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.704042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.704048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.704060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.309 [2024-11-20 06:40:02.704067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:47.309 [2024-11-20 06:40:02.704079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.309 [2024-11-20 06:40:02.704085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:47.309 [2024-11-20 06:40:02.704097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.309 [2024-11-20 06:40:02.704103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:47.309 [... the same nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs repeat for the remaining outstanding WRITE (lba 2424-3272) and READ (lba 2256-2416) commands on sqid:1 nsid:1; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:29:47.314 [2024-11-20 06:40:02.709708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.314
[2024-11-20 06:40:02.709715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.314 [2024-11-20 06:40:02.709727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.314 [2024-11-20 06:40:02.709734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.314 [2024-11-20 06:40:02.709746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.314 [2024-11-20 06:40:02.709752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.314 [2024-11-20 06:40:02.709765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.314 [2024-11-20 06:40:02.709774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.710213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.710234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.710253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.710273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.710292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.710311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3024 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.710330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.315 [2024-11-20 06:40:02.710631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.710643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.710650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 
06:40:02.711123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 
m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.315 [2024-11-20 06:40:02.711345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:47.315 [2024-11-20 06:40:02.711357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711743] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.711980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.711993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.712000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.712019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.712039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.712058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.712077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.712098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.712118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:47.316 [2024-11-20 06:40:02.712138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.316 [2024-11-20 06:40:02.712157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:47.316 [2024-11-20 06:40:02.712169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2672 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.317 [2024-11-20 06:40:02.712731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.317 [2024-11-20 06:40:02.712753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.317 [2024-11-20 06:40:02.712775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.317 [2024-11-20 06:40:02.712797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.317 [2024-11-20 06:40:02.712819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.712991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.712998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.713013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.713019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.713035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.713041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:47.317 [2024-11-20 06:40:02.713056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.713063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:47.317 
[2024-11-20 06:40:02.713078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.317 [2024-11-20 06:40:02.713087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.318 [2024-11-20 06:40:02.713587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.318 [2024-11-20 06:40:02.713594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:47.318 [2024-11-20 06:40:02.713613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.318 [2024-11-20 06:40:02.713620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
[... same READ command/completion pair repeated at 06:40:02.713 for lba 2304-2416 (step 8, varying cid, sqhd 000a-0018) plus one WRITE at lba 3032 (sqhd 0019), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
11364.46 IOPS, 44.39 MiB/s [2024-11-20T05:40:19.155Z]
10552.71 IOPS, 41.22 MiB/s [2024-11-20T05:40:19.155Z]
9849.20 IOPS, 38.47 MiB/s [2024-11-20T05:40:19.155Z]
9338.31 IOPS, 36.48 MiB/s [2024-11-20T05:40:19.155Z]
9461.35 IOPS, 36.96 MiB/s [2024-11-20T05:40:19.155Z]
9569.44 IOPS, 37.38 MiB/s [2024-11-20T05:40:19.155Z]
9762.00 IOPS, 38.13 MiB/s [2024-11-20T05:40:19.155Z]
9953.90 IOPS, 38.88 MiB/s [2024-11-20T05:40:19.155Z]
10130.29 IOPS, 39.57 MiB/s [2024-11-20T05:40:19.155Z]
10182.45 IOPS, 39.78 MiB/s [2024-11-20T05:40:19.155Z]
10228.48 IOPS, 39.95 MiB/s [2024-11-20T05:40:19.155Z]
10304.42 IOPS, 40.25 MiB/s [2024-11-20T05:40:19.155Z]
10444.44 IOPS, 40.80 MiB/s [2024-11-20T05:40:19.155Z]
10565.27 IOPS, 41.27 MiB/s [2024-11-20T05:40:19.155Z]
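The per-I/O size ties these throughput samples together: every command above is len:8, and if the namespace uses 512-byte logical blocks (the block size is not printed in this log, so that is an assumption), each I/O moves 4 KiB and MiB/s is simply IOPS / 256. A minimal Python check of that relation against the samples above:

    # Sanity-check the progress samples: 8 blocks * 512 B = 4096 B per I/O
    # (block size assumed, see note above), so MiB/s == IOPS * 4096 / 2**20.
    samples = [(11364.46, 44.39), (10552.71, 41.22), (9849.20, 38.47), (10565.27, 41.27)]
    for iops, mib_s in samples:
        assert abs(iops * 4096 / 2**20 - mib_s) < 0.01, (iops, mib_s)

All four samples (and the rest of the run) satisfy the relation to within rounding, consistent with a 4 KiB random I/O workload.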
[2024-11-20 06:40:16.351239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.319 [2024-11-20 06:40:16.351277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
[... a second, denser run (06:40:16.351-06:40:16.358) of interleaved WRITE (lba 18992-19848) and READ (lba 18488-19608) command notices on qid:1, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing from 003a and wrapping past 007f to 0069 ...]
00:29:47.323 [2024-11-20 06:40:16.358739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.323 [2024-11-20 06:40:16.358746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:47.323 [2024-11-20 06:40:16.358758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.323 [2024-11-20 06:40:16.358764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:47.323 [2024-11-20 06:40:16.358776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.323 [2024-11-20 06:40:16.358783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:47.323 [2024-11-20 06:40:16.358795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.323 [2024-11-20 06:40:16.358802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:47.323 [2024-11-20 06:40:16.358814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.323 [2024-11-20 06:40:16.358820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:47.323 [2024-11-20 06:40:16.358833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.323 [2024-11-20 06:40:16.358839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:47.323 [2024-11-20 06:40:16.359975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.323 [2024-11-20 06:40:16.359992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:47.323 [2024-11-20 06:40:16.360006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.360386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.360417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.360424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.362101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.362119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:29:47.324 [2024-11-20 06:40:16.362134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.362141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.362153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.324 [2024-11-20 06:40:16.362160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.362175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.362181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.362193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.324 [2024-11-20 06:40:16.362200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.324 [2024-11-20 06:40:16.362219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.362588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.362619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.362626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.363624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.363645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.325 [2024-11-20 06:40:16.363683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.363780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.325 [2024-11-20 06:40:16.363855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.363874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.363892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.363911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:47.325 [2024-11-20 06:40:16.363923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.325 [2024-11-20 06:40:16.363929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.372276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.372296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.372314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.372333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.372352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.372904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.372925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.372944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.372963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.372981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.372993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.373000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.373018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:29:47.326 [2024-11-20 06:40:16.373107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.373152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.373170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.373189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.373232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.373899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.373990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.373996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.374009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.374015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.374027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.374034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.374045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.374052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.374064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.374070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.374082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.374088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.374100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.326 [2024-11-20 06:40:16.374107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.374118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.326 [2024-11-20 06:40:16.374125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:47.326 [2024-11-20 06:40:16.374137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.374145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.374163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.374183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.374208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.374227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.374246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.374264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.374283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:47.327 [2024-11-20 06:40:16.374301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.374320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.374338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.374357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.374369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.374377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.375088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.375109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.375128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.375147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.375166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.375185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.375210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.375229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.375247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.375265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.375284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.375303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.375318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.375324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.376060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.376073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.376086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.376093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.376105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.327 [2024-11-20 06:40:16.376112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.376125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.376131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.376143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.376150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.327 [2024-11-20 06:40:16.376162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.327 [2024-11-20 06:40:16.376168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:29:47.328 [2024-11-20 06:40:16.376301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.376935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.376984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.376991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.377003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.377009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.377021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.377028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.377040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.377046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.377058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.377065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.377077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.377084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.377095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.377116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.377134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.328 [2024-11-20 06:40:16.377142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.377159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.377170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.378386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.328 [2024-11-20 06:40:16.378403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:47.328 [2024-11-20 06:40:16.378423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:47.329 [2024-11-20 06:40:16.378534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.378915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.378982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.378991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.379009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.379018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.379034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.379043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.379059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.379068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.379084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.379093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.379109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.379118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.379135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.379143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.379159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.379169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.379185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.379194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.381120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.381141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.381160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.381171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.381189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.381198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:29:47.329 [2024-11-20 06:40:16.381224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.381237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.381258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.329 [2024-11-20 06:40:16.381268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:47.329 [2024-11-20 06:40:16.381284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.329 [2024-11-20 06:40:16.381294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.381847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.381975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.381992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.330 [2024-11-20 06:40:16.382001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.382960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.382978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.382996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.383005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.383022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.330 [2024-11-20 06:40:16.383031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.383047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.383056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.383072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.383081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.383098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.330 [2024-11-20 06:40:16.383107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:47.330 [2024-11-20 06:40:16.383123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.383132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.383148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.383157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.383177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.383186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.383207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.383216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.383232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.383241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.383258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.383266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:29:47.331 [2024-11-20 06:40:16.384677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.331 [2024-11-20 06:40:16.384964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.331 [2024-11-20 06:40:16.384980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.331 [2024-11-20 06:40:16.384989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.385005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.332 [2024-11-20 06:40:16.385015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.385031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.332 [2024-11-20 06:40:16.385040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.385056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.332 [2024-11-20 06:40:16.385067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.385084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.332 [2024-11-20 06:40:16.385093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.386908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.332 [2024-11-20 06:40:16.386929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.386949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.332 [2024-11-20 06:40:16.386958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.386974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.332 [2024-11-20 06:40:16.386984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.332 [2024-11-20 06:40:16.387009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.332 [2024-11-20 06:40:16.387034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.332 [2024-11-20 06:40:16.387059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.332 [2024-11-20 06:40:16.387085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.332 [2024-11-20 06:40:16.387110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.332 [2024-11-20 06:40:16.387135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.332 [2024-11-20 06:40:16.387160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.332 [2024-11-20 06:40:16.387185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.332 [2024-11-20 06:40:16.387235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:47.332 [2024-11-20 06:40:16.387246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:29:47.332 [2024-11-20 06:40:16.387253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:47.332 [2024-11-20 06:40:16.387265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.332 [2024-11-20 06:40:16.387272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:47.332 [2024-11-20 06:40:16.387284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:47.332 [2024-11-20 06:40:16.387290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
[... roughly two hundred further command/completion NOTICE pairs in the same pattern omitted: queued READ/WRITE I/O on sqid:1 (nsid:1, len:8) repeatedly completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing from 0014 through 007f, wrapping to 0000 and continuing to 005d, timestamps 06:40:16.387 through 06:40:16.399 ...]
00:29:47.338 [2024-11-20 06:40:16.399701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.338 [2024-11-20 06:40:16.399708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:47.338 [2024-11-20 06:40:16.399721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.338 [2024-11-20 06:40:16.399728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:47.338 [2024-11-20 06:40:16.399743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.399750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.338 [2024-11-20 06:40:16.399768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.399787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.338 [2024-11-20 06:40:16.399805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.399824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.399843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.338 [2024-11-20 06:40:16.399861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.338 [2024-11-20 06:40:16.399880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.399898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.399917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.399935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.338 [2024-11-20 06:40:16.399954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.338 [2024-11-20 06:40:16.399974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.399986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.399993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.400005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.338 [2024-11-20 06:40:16.400012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:47.338 [2024-11-20 06:40:16.400023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.400030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.400048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.400067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.400086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.400104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.400123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.400142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.400163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.400182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.400208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.400227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.400247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.400266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.400978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.400994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.339 [2024-11-20 06:40:16.401015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.401091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.401462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.401483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.401505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.401523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.401542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.401666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.401672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.402103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.402123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.402142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.402164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.402182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.402207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-11-20 06:40:16.402226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.339 [2024-11-20 06:40:16.402244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:47.339 [2024-11-20 06:40:16.402256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.402263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.402275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.402281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.402293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.402300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.402312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.402318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.402330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.402337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:29:47.340 [2024-11-20 06:40:16.402349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.402356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.402367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.402374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.402388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.402395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.402407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.402414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.340 [2024-11-20 06:40:16.403802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.340 [2024-11-20 06:40:16.403858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:47.340 [2024-11-20 06:40:16.403870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.340 [2024-11-20 06:40:16.403877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.403889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.403896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.403908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.403915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.403927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.403934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.403946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.403952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.403965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.403971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.403985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.403992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:29:47.341 [2024-11-20 06:40:16.405868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.405912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.405982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.405994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.406001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.406013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.341 [2024-11-20 06:40:16.406020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.407033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.407048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:47.341 [2024-11-20 06:40:16.407063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.341 [2024-11-20 06:40:16.407070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
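For context on the storm condensed above: SPDK prints NVMe completion status as (SCT/SC), so (03/02) is Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), the status a controller returns while the ANA group serving the namespace is inaccessible, which is exactly the path-state transition this multipath test provokes. When a console log fills with these, a tally is more useful than the raw lines; a minimal bash sketch, assuming the console output was saved as build.log (a hypothetical path):

# Count the ANA-inaccessible completions in a saved console log.
# "build.log" is a hypothetical path; point it at the real capture.
log=build.log
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"
# Break the affected commands down by opcode:
grep -Eo '\*NOTICE\*: (READ|WRITE) sqid:1' "$log" | sort | uniq -c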
00:29:47.342 10631.33 IOPS, 41.53 MiB/s
[2024-11-20T05:40:19.178Z] 10666.29 IOPS, 41.67 MiB/s
[2024-11-20T05:40:19.178Z] Received shutdown signal, test time was about 28.839964 seconds
00:29:47.342 Latency(us)
00:29:47.342 [2024-11-20T05:40:19.178Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:47.342 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:47.342 Verification LBA range: start 0x0 length 0x4000
00:29:47.342 Nvme0n1                     :      28.84   10690.37      41.76       0.00     0.00   11953.56     577.34 3083812.08
00:29:47.342 [2024-11-20T05:40:19.178Z] ===================================================================================================================
00:29:47.342 [2024-11-20T05:40:19.178Z] Total                       :               10690.37      41.76       0.00     0.00   11953.56     577.34 3083812.08
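The MiB/s column follows directly from the IOPS column at the 4096-byte IO size named in the job line: MiB/s = IOPS x 4096 / 2^20, i.e. IOPS / 256. A quick sanity check against the Total row, with the values copied from the table above:

# 10690.37 IOPS of 4 KiB each: 10690.37 / 256 = 41.76 MiB/s, matching the table
awk 'BEGIN { printf "%.2f MiB/s\n", 10690.37 * 4096 / (1024 * 1024) }'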
00:29:47.342 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 660493 ']'
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 660493
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 660493 ']'
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 660493
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:47.600 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 660493
00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 660493'
killing process with pid 660493
00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 660493
00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 660493
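The killprocess trace just above is a kill-and-reap pattern worth naming: confirm the pid is alive with kill -0, resolve its command name (refusing to kill a sudo wrapper), then kill and wait. A standalone sketch of that pattern, reconstructed from the trace rather than copied from common/autotest_common.sh:

# Sketch of the kill-and-reap pattern traced above (illustrative, not the
# verbatim common/autotest_common.sh implementation).
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1       # is the process alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name != sudo ]] || return 1      # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                      # reap it if it is our child
}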
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.601 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.221 00:29:50.221 real 0m40.745s 00:29:50.221 user 1m50.228s 00:29:50.221 sys 0m11.691s 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:50.221 ************************************ 00:29:50.221 END TEST nvmf_host_multipath_status 00:29:50.221 ************************************ 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.221 ************************************ 00:29:50.221 START TEST nvmf_discovery_remove_ifc 00:29:50.221 ************************************ 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:50.221 * Looking for test storage... 
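
Annotation: the END TEST / START TEST banner pairs and the real/user/sys block above are produced by the harness's run_test wrapper, which prints a START banner, times the sub-script, and closes with an END banner before the next test begins. A minimal sketch of that pattern, assuming a plain time(1)-style wrapper — the banner width and xtrace/timing bookkeeping of the real autotest_common.sh helper differ:

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                          # emits the real/user/sys block seen above
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

Invoked here as run_test nvmf_discovery_remove_ifc .../host/discovery_remove_ifc.sh --transport=tcp, matching the START TEST banner above.
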
00:29:50.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.221 --rc genhtml_branch_coverage=1 00:29:50.221 --rc genhtml_function_coverage=1 00:29:50.221 --rc genhtml_legend=1 00:29:50.221 --rc geninfo_all_blocks=1 00:29:50.221 --rc geninfo_unexecuted_blocks=1 00:29:50.221 00:29:50.221 ' 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.221 --rc genhtml_branch_coverage=1 00:29:50.221 --rc genhtml_function_coverage=1 00:29:50.221 --rc genhtml_legend=1 00:29:50.221 --rc geninfo_all_blocks=1 00:29:50.221 --rc geninfo_unexecuted_blocks=1 00:29:50.221 00:29:50.221 ' 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.221 --rc genhtml_branch_coverage=1 00:29:50.221 --rc genhtml_function_coverage=1 00:29:50.221 --rc genhtml_legend=1 00:29:50.221 --rc geninfo_all_blocks=1 00:29:50.221 --rc geninfo_unexecuted_blocks=1 00:29:50.221 00:29:50.221 ' 00:29:50.221 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.222 --rc genhtml_branch_coverage=1 00:29:50.222 --rc genhtml_function_coverage=1 00:29:50.222 --rc genhtml_legend=1 00:29:50.222 --rc geninfo_all_blocks=1 00:29:50.222 --rc geninfo_unexecuted_blocks=1 00:29:50.222 00:29:50.222 ' 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.222 
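
Annotation: the xtrace block above steps through scripts/common.sh's component-wise version comparison — lt 1.15 2 asks whether the detected lcov (1.15) predates 2.x, which selects the fallback LCOV_OPTS just printed. A condensed, self-contained sketch of the same algorithm; function names follow the trace, but the body is simplified and omits the non-numeric guards of the real script:

cmp_versions() {
  local op=$2 i a b v1 v2
  local IFS=.-                      # split versions on '.' and '-', as in the trace
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$3"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    a=${v1[i]:-0} b=${v2[i]:-0}     # missing components compare as 0
    ((a > b)) && { [[ $op == '>' ]]; return; }
    ((a < b)) && { [[ $op == '<' ]]; return; }
  done
  [[ $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> exit 0 (1 < 2 at the first component)
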
06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.222 06:40:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:29:56.794 06:40:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:56.794 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.794 06:40:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:56.794 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:56.794 Found net devices under 0000:86:00.0: cvl_0_0 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:56.794 Found net devices under 0000:86:00.1: cvl_0_1 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.794 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.795 
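
Annotation: the nvmf_tcp_init sequence above builds the test topology — one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of those steps, with device names and addresses taken from this log and error handling omitted:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port; the comment tag lets teardown strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings that follow verify reachability in both directions before the target is started inside the namespace.
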
06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:29:56.795 00:29:56.795 --- 10.0.0.2 ping statistics --- 00:29:56.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.795 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:29:56.795 00:29:56.795 --- 10.0.0.1 ping statistics --- 00:29:56.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.795 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=669297 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 669297 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 669297 ']' 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:56.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.795 [2024-11-20 06:40:27.757551] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:29:56.795 [2024-11-20 06:40:27.757597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.795 [2024-11-20 06:40:27.838008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.795 [2024-11-20 06:40:27.878614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.795 [2024-11-20 06:40:27.878650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.795 [2024-11-20 06:40:27.878657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.795 [2024-11-20 06:40:27.878663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.795 [2024-11-20 06:40:27.878668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.795 [2024-11-20 06:40:27.879244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.795 06:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.795 [2024-11-20 06:40:28.026559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.795 [2024-11-20 06:40:28.034723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:56.795 null0 00:29:56.795 [2024-11-20 06:40:28.066718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=669491 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 669491 /tmp/host.sock 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 669491 ']' 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:56.795 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.795 [2024-11-20 06:40:28.133887] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:29:56.795 [2024-11-20 06:40:28.133932] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669491 ] 00:29:56.795 [2024-11-20 06:40:28.205003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.795 [2024-11-20 06:40:28.248103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:56.795 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.796 06:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:57.730 [2024-11-20 06:40:29.424353] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:57.730 [2024-11-20 06:40:29.424375] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:57.730 [2024-11-20 06:40:29.424390] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:57.730 [2024-11-20 06:40:29.510653] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:57.988 [2024-11-20 06:40:29.613307] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:57.988 [2024-11-20 06:40:29.613995] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x153da10:1 started. 00:29:57.988 [2024-11-20 06:40:29.615300] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:57.988 [2024-11-20 06:40:29.615339] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:57.988 [2024-11-20 06:40:29.615357] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:57.988 [2024-11-20 06:40:29.615368] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:57.988 [2024-11-20 06:40:29.615385] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:57.988 [2024-11-20 06:40:29.621885] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x153da10 was disconnected and freed. delete nvme_qpair. 
00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.988 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:58.246 06:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:59.275 06:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:00.208 06:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:01.142 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:01.142 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.142 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:01.142 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.142 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:01.142 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:01.142 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:01.142 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.400 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:01.400 06:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:02.336 06:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:02.336 06:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.336 06:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:02.336 06:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.336 06:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:02.336 06:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:02.336 06:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:02.336 06:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.336 06:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:30:02.336 06:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:03.269 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:03.269 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.269 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:03.269 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.269 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:03.269 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:03.269 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:03.269 [2024-11-20 06:40:35.056968] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:03.269 [2024-11-20 06:40:35.057010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.269 [2024-11-20 06:40:35.057021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.269 [2024-11-20 06:40:35.057032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.269 [2024-11-20 06:40:35.057039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.270 [2024-11-20 06:40:35.057047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.270 [2024-11-20 06:40:35.057053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.270 [2024-11-20 06:40:35.057061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.270 [2024-11-20 06:40:35.057067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.270 [2024-11-20 06:40:35.057075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.270 [2024-11-20 06:40:35.057082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.270 [2024-11-20 06:40:35.057092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a220 is same with the state(6) to be set 00:30:03.270 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.270 [2024-11-20 06:40:35.066991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a220 (9): Bad file descriptor 00:30:03.270 [2024-11-20 06:40:35.077028] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:30:03.270 [2024-11-20 06:40:35.077039] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:03.270 [2024-11-20 06:40:35.077043] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:03.270 [2024-11-20 06:40:35.077048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:03.270 [2024-11-20 06:40:35.077068] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:03.270 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:03.270 06:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:04.648 [2024-11-20 06:40:36.103234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:04.648 [2024-11-20 06:40:36.103301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a220 with addr=10.0.0.2, port=4420 00:30:04.648 [2024-11-20 06:40:36.103332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a220 is same with the state(6) to be set 00:30:04.648 [2024-11-20 06:40:36.103384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a220 (9): Bad file descriptor 00:30:04.648 [2024-11-20 06:40:36.104329] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:30:04.648 [2024-11-20 06:40:36.104392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:04.648 [2024-11-20 06:40:36.104416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:04.648 [2024-11-20 06:40:36.104439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:04.648 [2024-11-20 06:40:36.104457] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:04.648 [2024-11-20 06:40:36.104473] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:04.648 [2024-11-20 06:40:36.104487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:04.648 [2024-11-20 06:40:36.104509] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:04.648 [2024-11-20 06:40:36.104522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:04.648 06:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:05.585 [2024-11-20 06:40:37.107038] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:05.585 [2024-11-20 06:40:37.107059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:05.585 [2024-11-20 06:40:37.107070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:05.585 [2024-11-20 06:40:37.107076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:05.585 [2024-11-20 06:40:37.107083] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:05.585 [2024-11-20 06:40:37.107090] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:05.585 [2024-11-20 06:40:37.107094] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:05.585 [2024-11-20 06:40:37.107098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
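While bdev_nvme cycles through delete-qpairs, disconnect, and reconnect as above, the controller's view of this can be watched out-of-band with the bdev_nvme_get_controllers RPC. This is illustrative only (the test itself only polls the bdev list) and assumes the same private socket as the trace:

# Print each NVMe bdev controller with its transport IDs and state,
# handy for observing a reconnect loop from another shell.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .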
00:30:05.585 [2024-11-20 06:40:37.107117] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:05.585 [2024-11-20 06:40:37.107136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.585 [2024-11-20 06:40:37.107146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 06:40:37.107155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.585 [2024-11-20 06:40:37.107171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 06:40:37.107178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.585 [2024-11-20 06:40:37.107185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 06:40:37.107192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.585 [2024-11-20 06:40:37.107198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 06:40:37.107209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.585 [2024-11-20 06:40:37.107216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 06:40:37.107223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
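One trace-format note before the next chunk: when the wait loop's comparison is printed as [[ '' != \n\v\m\e\1\n\1 ]], nothing is corrupted. Under set -x, bash re-quotes the right-hand side of a [[ ... != ... ]] test (which is a glob pattern) by backslash-escaping every character, so the literal nvme1n1 comes out as \n\v\m\e\1\n\1. Reproducible in isolation:

set -x
bdevs=""
[[ "$bdevs" != nvme1n1 ]]   # traced as: [[ '' != \n\v\m\e\1\n\1 ]]
set +x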
00:30:05.585 [2024-11-20 06:40:37.107656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1509900 (9): Bad file descriptor 00:30:05.585 [2024-11-20 06:40:37.108667] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:05.585 [2024-11-20 06:40:37.108677] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:05.585 06:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:06.520 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:06.520 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.520 06:40:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:06.520 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:06.520 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.520 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:06.520 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:06.520 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.794 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:06.794 06:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:07.360 [2024-11-20 06:40:39.158739] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:07.360 [2024-11-20 06:40:39.158755] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:07.360 [2024-11-20 06:40:39.158767] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:07.617 [2024-11-20 06:40:39.285168] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:07.617 [2024-11-20 06:40:39.379788] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:07.617 [2024-11-20 06:40:39.380388] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x150e820:1 started. 00:30:07.617 [2024-11-20 06:40:39.381408] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:07.617 [2024-11-20 06:40:39.381441] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:07.617 [2024-11-20 06:40:39.381457] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:07.617 [2024-11-20 06:40:39.381469] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:07.617 [2024-11-20 06:40:39.381476] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:07.617 [2024-11-20 06:40:39.387001] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x150e820 was disconnected and freed. delete nvme_qpair. 
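The Discovery[10.0.0.2:8009] poller that just re-attached nvme1 is bdev_nvme's built-in discovery service: it keeps a persistent connection to the discovery subsystem, re-reads the discovery log page, and creates or removes data-path controllers to match it. It would have been started earlier in the test with something along these lines (a sketch, not the verbatim script; flag names per scripts/rpc.py bdev_nvme_start_discovery):

# Attach a discovery controller at 10.0.0.2:8009 and auto-create a
# bdev (nvme1n1 above) for every NVM subsystem it reports.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009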
00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 669491 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 669491 ']' 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 669491 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:07.617 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 669491 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 669491' 00:30:07.876 killing process with pid 669491 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 669491 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 669491 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.876 rmmod nvme_tcp 00:30:07.876 rmmod nvme_fabrics 00:30:07.876 rmmod nvme_keyring 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 669297 ']' 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 669297 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 669297 ']' 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 669297 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@957 -- # uname 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:07.876 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 669297 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 669297' 00:30:08.135 killing process with pid 669297 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 669297 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 669297 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.135 06:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.671 06:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.671 00:30:10.671 real 0m20.413s 00:30:10.671 user 0m24.690s 00:30:10.671 sys 0m5.742s 00:30:10.671 06:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:10.671 06:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:10.671 ************************************ 00:30:10.671 END TEST nvmf_discovery_remove_ifc 00:30:10.671 ************************************ 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.671 ************************************ 00:30:10.671 
START TEST nvmf_identify_kernel_target 00:30:10.671 ************************************ 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:10.671 * Looking for test storage... 00:30:10.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.671 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.672 --rc genhtml_branch_coverage=1 00:30:10.672 --rc genhtml_function_coverage=1 00:30:10.672 --rc genhtml_legend=1 00:30:10.672 --rc geninfo_all_blocks=1 00:30:10.672 --rc geninfo_unexecuted_blocks=1 00:30:10.672 00:30:10.672 ' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.672 --rc genhtml_branch_coverage=1 00:30:10.672 --rc genhtml_function_coverage=1 00:30:10.672 --rc genhtml_legend=1 00:30:10.672 --rc geninfo_all_blocks=1 00:30:10.672 --rc geninfo_unexecuted_blocks=1 00:30:10.672 00:30:10.672 ' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.672 --rc genhtml_branch_coverage=1 00:30:10.672 --rc genhtml_function_coverage=1 00:30:10.672 --rc genhtml_legend=1 00:30:10.672 --rc geninfo_all_blocks=1 00:30:10.672 --rc geninfo_unexecuted_blocks=1 00:30:10.672 00:30:10.672 ' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.672 --rc genhtml_branch_coverage=1 00:30:10.672 --rc genhtml_function_coverage=1 00:30:10.672 --rc genhtml_legend=1 00:30:10.672 --rc geninfo_all_blocks=1 00:30:10.672 --rc geninfo_unexecuted_blocks=1 00:30:10.672 00:30:10.672 ' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:10.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.672 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.673 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.673 06:40:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.246 06:40:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.246 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:17.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:17.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:17.247 Found net devices under 0000:86:00.0: cvl_0_0 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:17.247 Found net devices under 0000:86:00.1: cvl_0_1 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.247 06:40:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:30:17.247 00:30:17.247 --- 10.0.0.2 ping statistics --- 00:30:17.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.247 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:17.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:30:17.247 00:30:17.247 --- 10.0.0.1 ping statistics --- 00:30:17.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.247 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.247 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.248 06:40:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:17.248 06:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:19.162 Waiting for block devices as requested 00:30:19.422 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:19.422 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:19.422 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:19.682 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:19.682 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:19.682 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:19.941 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:19.941 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:19.941 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:20.200 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:20.200 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:20.200 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:20.200 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:20.459 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:20.459 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:20.459 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:20.718 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
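The chunk that follows first verifies the local NVMe disk is unclaimed (spdk-gpt.py reporting 'No valid GPT data, bailing' means there is no partition table to protect) and then builds the kernel NVMe-oF target purely through configfs (configure_kernel_target in nvmf/common.sh). Condensed into a standalone sketch, with standard nvmet attribute names filled in where the trace shows only bare echos; the exact attributes written are an assumption, the authoritative sequence is in nvmf/common.sh:

#!/usr/bin/env bash
# Export local /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn on
# 10.0.0.1:4420/TCP via the kernel nvmet configfs tree.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet
modprobe nvmet-tcp   # transport module for the TCP port below
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"

echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 1 > "$subsys/attr_allow_any_host"
# The SPDK-prefixed echo in the trace presumably sets the model string:
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"

# Exposing the subsystem on the port is just a symlink:
ln -s "$subsys" "$nvmet/ports/1/subsystems/"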
00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:20.718 No valid GPT data, bailing 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:20.718 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:20.979 00:30:20.979 Discovery Log Number of Records 2, Generation counter 2 00:30:20.979 =====Discovery Log Entry 0====== 00:30:20.979 trtype: tcp 00:30:20.979 adrfam: ipv4 00:30:20.979 subtype: current discovery subsystem 00:30:20.979 treq: not specified, sq flow control disable supported 00:30:20.979 portid: 1 00:30:20.979 trsvcid: 4420 00:30:20.979 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:20.979 traddr: 10.0.0.1 00:30:20.979 eflags: none 00:30:20.979 sectype: none 00:30:20.979 =====Discovery Log Entry 1====== 00:30:20.979 trtype: tcp 00:30:20.979 adrfam: ipv4 00:30:20.979 subtype: nvme subsystem 00:30:20.979 treq: not specified, sq flow control disable 
supported 00:30:20.979 portid: 1 00:30:20.979 trsvcid: 4420 00:30:20.979 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:20.979 traddr: 10.0.0.1 00:30:20.979 eflags: none 00:30:20.979 sectype: none 00:30:20.979 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:20.979 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:20.979 ===================================================== 00:30:20.979 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:20.979 ===================================================== 00:30:20.979 Controller Capabilities/Features 00:30:20.979 ================================ 00:30:20.979 Vendor ID: 0000 00:30:20.979 Subsystem Vendor ID: 0000 00:30:20.979 Serial Number: 6f877553ca91b1a2829b 00:30:20.979 Model Number: Linux 00:30:20.979 Firmware Version: 6.8.9-20 00:30:20.979 Recommended Arb Burst: 0 00:30:20.979 IEEE OUI Identifier: 00 00 00 00:30:20.979 Multi-path I/O 00:30:20.979 May have multiple subsystem ports: No 00:30:20.979 May have multiple controllers: No 00:30:20.979 Associated with SR-IOV VF: No 00:30:20.979 Max Data Transfer Size: Unlimited 00:30:20.979 Max Number of Namespaces: 0 00:30:20.979 Max Number of I/O Queues: 1024 00:30:20.979 NVMe Specification Version (VS): 1.3 00:30:20.979 NVMe Specification Version (Identify): 1.3 00:30:20.979 Maximum Queue Entries: 1024 00:30:20.979 Contiguous Queues Required: No 00:30:20.979 Arbitration Mechanisms Supported 00:30:20.979 Weighted Round Robin: Not Supported 00:30:20.979 Vendor Specific: Not Supported 00:30:20.979 Reset Timeout: 7500 ms 00:30:20.979 Doorbell Stride: 4 bytes 00:30:20.979 NVM Subsystem Reset: Not Supported 00:30:20.979 Command Sets Supported 00:30:20.979 NVM Command Set: Supported 00:30:20.979 Boot Partition: Not Supported 00:30:20.979 Memory Page Size Minimum: 4096 bytes 00:30:20.979 Memory Page Size Maximum: 4096 bytes 00:30:20.979 Persistent Memory Region: Not Supported 00:30:20.979 Optional Asynchronous Events Supported 00:30:20.979 Namespace Attribute Notices: Not Supported 00:30:20.979 Firmware Activation Notices: Not Supported 00:30:20.979 ANA Change Notices: Not Supported 00:30:20.979 PLE Aggregate Log Change Notices: Not Supported 00:30:20.979 LBA Status Info Alert Notices: Not Supported 00:30:20.979 EGE Aggregate Log Change Notices: Not Supported 00:30:20.979 Normal NVM Subsystem Shutdown event: Not Supported 00:30:20.979 Zone Descriptor Change Notices: Not Supported 00:30:20.979 Discovery Log Change Notices: Supported 00:30:20.979 Controller Attributes 00:30:20.980 128-bit Host Identifier: Not Supported 00:30:20.980 Non-Operational Permissive Mode: Not Supported 00:30:20.980 NVM Sets: Not Supported 00:30:20.980 Read Recovery Levels: Not Supported 00:30:20.980 Endurance Groups: Not Supported 00:30:20.980 Predictable Latency Mode: Not Supported 00:30:20.980 Traffic Based Keep ALive: Not Supported 00:30:20.980 Namespace Granularity: Not Supported 00:30:20.980 SQ Associations: Not Supported 00:30:20.980 UUID List: Not Supported 00:30:20.980 Multi-Domain Subsystem: Not Supported 00:30:20.980 Fixed Capacity Management: Not Supported 00:30:20.980 Variable Capacity Management: Not Supported 00:30:20.980 Delete Endurance Group: Not Supported 00:30:20.980 Delete NVM Set: Not Supported 00:30:20.980 Extended LBA Formats Supported: Not Supported 00:30:20.980 Flexible Data Placement 
Supported: Not Supported 00:30:20.980 00:30:20.980 Controller Memory Buffer Support 00:30:20.980 ================================ 00:30:20.980 Supported: No 00:30:20.980 00:30:20.980 Persistent Memory Region Support 00:30:20.980 ================================ 00:30:20.980 Supported: No 00:30:20.980 00:30:20.980 Admin Command Set Attributes 00:30:20.980 ============================ 00:30:20.980 Security Send/Receive: Not Supported 00:30:20.980 Format NVM: Not Supported 00:30:20.980 Firmware Activate/Download: Not Supported 00:30:20.980 Namespace Management: Not Supported 00:30:20.980 Device Self-Test: Not Supported 00:30:20.980 Directives: Not Supported 00:30:20.980 NVMe-MI: Not Supported 00:30:20.980 Virtualization Management: Not Supported 00:30:20.980 Doorbell Buffer Config: Not Supported 00:30:20.980 Get LBA Status Capability: Not Supported 00:30:20.980 Command & Feature Lockdown Capability: Not Supported 00:30:20.980 Abort Command Limit: 1 00:30:20.980 Async Event Request Limit: 1 00:30:20.980 Number of Firmware Slots: N/A 00:30:20.980 Firmware Slot 1 Read-Only: N/A 00:30:20.980 Firmware Activation Without Reset: N/A 00:30:20.980 Multiple Update Detection Support: N/A 00:30:20.980 Firmware Update Granularity: No Information Provided 00:30:20.980 Per-Namespace SMART Log: No 00:30:20.980 Asymmetric Namespace Access Log Page: Not Supported 00:30:20.980 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:20.980 Command Effects Log Page: Not Supported 00:30:20.980 Get Log Page Extended Data: Supported 00:30:20.980 Telemetry Log Pages: Not Supported 00:30:20.980 Persistent Event Log Pages: Not Supported 00:30:20.980 Supported Log Pages Log Page: May Support 00:30:20.980 Commands Supported & Effects Log Page: Not Supported 00:30:20.980 Feature Identifiers & Effects Log Page:May Support 00:30:20.980 NVMe-MI Commands & Effects Log Page: May Support 00:30:20.980 Data Area 4 for Telemetry Log: Not Supported 00:30:20.980 Error Log Page Entries Supported: 1 00:30:20.980 Keep Alive: Not Supported 00:30:20.980 00:30:20.980 NVM Command Set Attributes 00:30:20.980 ========================== 00:30:20.980 Submission Queue Entry Size 00:30:20.980 Max: 1 00:30:20.980 Min: 1 00:30:20.980 Completion Queue Entry Size 00:30:20.980 Max: 1 00:30:20.980 Min: 1 00:30:20.980 Number of Namespaces: 0 00:30:20.980 Compare Command: Not Supported 00:30:20.980 Write Uncorrectable Command: Not Supported 00:30:20.980 Dataset Management Command: Not Supported 00:30:20.980 Write Zeroes Command: Not Supported 00:30:20.980 Set Features Save Field: Not Supported 00:30:20.980 Reservations: Not Supported 00:30:20.980 Timestamp: Not Supported 00:30:20.980 Copy: Not Supported 00:30:20.980 Volatile Write Cache: Not Present 00:30:20.980 Atomic Write Unit (Normal): 1 00:30:20.980 Atomic Write Unit (PFail): 1 00:30:20.980 Atomic Compare & Write Unit: 1 00:30:20.980 Fused Compare & Write: Not Supported 00:30:20.980 Scatter-Gather List 00:30:20.980 SGL Command Set: Supported 00:30:20.980 SGL Keyed: Not Supported 00:30:20.980 SGL Bit Bucket Descriptor: Not Supported 00:30:20.980 SGL Metadata Pointer: Not Supported 00:30:20.980 Oversized SGL: Not Supported 00:30:20.980 SGL Metadata Address: Not Supported 00:30:20.980 SGL Offset: Supported 00:30:20.980 Transport SGL Data Block: Not Supported 00:30:20.980 Replay Protected Memory Block: Not Supported 00:30:20.980 00:30:20.980 Firmware Slot Information 00:30:20.980 ========================= 00:30:20.980 Active slot: 0 00:30:20.980 00:30:20.980 00:30:20.980 Error Log 00:30:20.980 
========= 00:30:20.980 00:30:20.980 Active Namespaces 00:30:20.980 ================= 00:30:20.980 Discovery Log Page 00:30:20.980 ================== 00:30:20.980 Generation Counter: 2 00:30:20.980 Number of Records: 2 00:30:20.980 Record Format: 0 00:30:20.980 00:30:20.980 Discovery Log Entry 0 00:30:20.980 ---------------------- 00:30:20.980 Transport Type: 3 (TCP) 00:30:20.980 Address Family: 1 (IPv4) 00:30:20.980 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:20.980 Entry Flags: 00:30:20.980 Duplicate Returned Information: 0 00:30:20.980 Explicit Persistent Connection Support for Discovery: 0 00:30:20.980 Transport Requirements: 00:30:20.980 Secure Channel: Not Specified 00:30:20.980 Port ID: 1 (0x0001) 00:30:20.980 Controller ID: 65535 (0xffff) 00:30:20.980 Admin Max SQ Size: 32 00:30:20.980 Transport Service Identifier: 4420 00:30:20.980 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:20.980 Transport Address: 10.0.0.1 00:30:20.980 Discovery Log Entry 1 00:30:20.980 ---------------------- 00:30:20.980 Transport Type: 3 (TCP) 00:30:20.980 Address Family: 1 (IPv4) 00:30:20.980 Subsystem Type: 2 (NVM Subsystem) 00:30:20.980 Entry Flags: 00:30:20.980 Duplicate Returned Information: 0 00:30:20.980 Explicit Persistent Connection Support for Discovery: 0 00:30:20.980 Transport Requirements: 00:30:20.980 Secure Channel: Not Specified 00:30:20.980 Port ID: 1 (0x0001) 00:30:20.980 Controller ID: 65535 (0xffff) 00:30:20.980 Admin Max SQ Size: 32 00:30:20.980 Transport Service Identifier: 4420 00:30:20.980 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:20.980 Transport Address: 10.0.0.1 00:30:20.980 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:20.980 get_feature(0x01) failed 00:30:20.980 get_feature(0x02) failed 00:30:20.980 get_feature(0x04) failed 00:30:20.980 ===================================================== 00:30:20.980 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:20.980 ===================================================== 00:30:20.980 Controller Capabilities/Features 00:30:20.980 ================================ 00:30:20.980 Vendor ID: 0000 00:30:20.980 Subsystem Vendor ID: 0000 00:30:20.980 Serial Number: 02c58fd86aeeeaba6499 00:30:20.980 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:20.980 Firmware Version: 6.8.9-20 00:30:20.980 Recommended Arb Burst: 6 00:30:20.980 IEEE OUI Identifier: 00 00 00 00:30:20.980 Multi-path I/O 00:30:20.980 May have multiple subsystem ports: Yes 00:30:20.980 May have multiple controllers: Yes 00:30:20.980 Associated with SR-IOV VF: No 00:30:20.980 Max Data Transfer Size: Unlimited 00:30:20.980 Max Number of Namespaces: 1024 00:30:20.980 Max Number of I/O Queues: 128 00:30:20.980 NVMe Specification Version (VS): 1.3 00:30:20.980 NVMe Specification Version (Identify): 1.3 00:30:20.980 Maximum Queue Entries: 1024 00:30:20.980 Contiguous Queues Required: No 00:30:20.980 Arbitration Mechanisms Supported 00:30:20.980 Weighted Round Robin: Not Supported 00:30:20.980 Vendor Specific: Not Supported 00:30:20.981 Reset Timeout: 7500 ms 00:30:20.981 Doorbell Stride: 4 bytes 00:30:20.981 NVM Subsystem Reset: Not Supported 00:30:20.981 Command Sets Supported 00:30:20.981 NVM Command Set: Supported 00:30:20.981 Boot Partition: Not Supported 00:30:20.981 
Memory Page Size Minimum: 4096 bytes 00:30:20.981 Memory Page Size Maximum: 4096 bytes 00:30:20.981 Persistent Memory Region: Not Supported 00:30:20.981 Optional Asynchronous Events Supported 00:30:20.981 Namespace Attribute Notices: Supported 00:30:20.981 Firmware Activation Notices: Not Supported 00:30:20.981 ANA Change Notices: Supported 00:30:20.981 PLE Aggregate Log Change Notices: Not Supported 00:30:20.981 LBA Status Info Alert Notices: Not Supported 00:30:20.981 EGE Aggregate Log Change Notices: Not Supported 00:30:20.981 Normal NVM Subsystem Shutdown event: Not Supported 00:30:20.981 Zone Descriptor Change Notices: Not Supported 00:30:20.981 Discovery Log Change Notices: Not Supported 00:30:20.981 Controller Attributes 00:30:20.981 128-bit Host Identifier: Supported 00:30:20.981 Non-Operational Permissive Mode: Not Supported 00:30:20.981 NVM Sets: Not Supported 00:30:20.981 Read Recovery Levels: Not Supported 00:30:20.981 Endurance Groups: Not Supported 00:30:20.981 Predictable Latency Mode: Not Supported 00:30:20.981 Traffic Based Keep ALive: Supported 00:30:20.981 Namespace Granularity: Not Supported 00:30:20.981 SQ Associations: Not Supported 00:30:20.981 UUID List: Not Supported 00:30:20.981 Multi-Domain Subsystem: Not Supported 00:30:20.981 Fixed Capacity Management: Not Supported 00:30:20.981 Variable Capacity Management: Not Supported 00:30:20.981 Delete Endurance Group: Not Supported 00:30:20.981 Delete NVM Set: Not Supported 00:30:20.981 Extended LBA Formats Supported: Not Supported 00:30:20.981 Flexible Data Placement Supported: Not Supported 00:30:20.981 00:30:20.981 Controller Memory Buffer Support 00:30:20.981 ================================ 00:30:20.981 Supported: No 00:30:20.981 00:30:20.981 Persistent Memory Region Support 00:30:20.981 ================================ 00:30:20.981 Supported: No 00:30:20.981 00:30:20.981 Admin Command Set Attributes 00:30:20.981 ============================ 00:30:20.981 Security Send/Receive: Not Supported 00:30:20.981 Format NVM: Not Supported 00:30:20.981 Firmware Activate/Download: Not Supported 00:30:20.981 Namespace Management: Not Supported 00:30:20.981 Device Self-Test: Not Supported 00:30:20.981 Directives: Not Supported 00:30:20.981 NVMe-MI: Not Supported 00:30:20.981 Virtualization Management: Not Supported 00:30:20.981 Doorbell Buffer Config: Not Supported 00:30:20.981 Get LBA Status Capability: Not Supported 00:30:20.981 Command & Feature Lockdown Capability: Not Supported 00:30:20.981 Abort Command Limit: 4 00:30:20.981 Async Event Request Limit: 4 00:30:20.981 Number of Firmware Slots: N/A 00:30:20.981 Firmware Slot 1 Read-Only: N/A 00:30:20.981 Firmware Activation Without Reset: N/A 00:30:20.981 Multiple Update Detection Support: N/A 00:30:20.981 Firmware Update Granularity: No Information Provided 00:30:20.981 Per-Namespace SMART Log: Yes 00:30:20.981 Asymmetric Namespace Access Log Page: Supported 00:30:20.981 ANA Transition Time : 10 sec 00:30:20.981 00:30:20.981 Asymmetric Namespace Access Capabilities 00:30:20.981 ANA Optimized State : Supported 00:30:20.981 ANA Non-Optimized State : Supported 00:30:20.981 ANA Inaccessible State : Supported 00:30:20.981 ANA Persistent Loss State : Supported 00:30:20.981 ANA Change State : Supported 00:30:20.981 ANAGRPID is not changed : No 00:30:20.981 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:20.981 00:30:20.981 ANA Group Identifier Maximum : 128 00:30:20.981 Number of ANA Group Identifiers : 128 00:30:20.981 Max Number of Allowed Namespaces : 1024 00:30:20.981 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:20.981 Command Effects Log Page: Supported 00:30:20.981 Get Log Page Extended Data: Supported 00:30:20.981 Telemetry Log Pages: Not Supported 00:30:20.981 Persistent Event Log Pages: Not Supported 00:30:20.981 Supported Log Pages Log Page: May Support 00:30:20.981 Commands Supported & Effects Log Page: Not Supported 00:30:20.981 Feature Identifiers & Effects Log Page:May Support 00:30:20.981 NVMe-MI Commands & Effects Log Page: May Support 00:30:20.981 Data Area 4 for Telemetry Log: Not Supported 00:30:20.981 Error Log Page Entries Supported: 128 00:30:20.981 Keep Alive: Supported 00:30:20.981 Keep Alive Granularity: 1000 ms 00:30:20.981 00:30:20.981 NVM Command Set Attributes 00:30:20.981 ========================== 00:30:20.981 Submission Queue Entry Size 00:30:20.981 Max: 64 00:30:20.981 Min: 64 00:30:20.981 Completion Queue Entry Size 00:30:20.981 Max: 16 00:30:20.981 Min: 16 00:30:20.981 Number of Namespaces: 1024 00:30:20.981 Compare Command: Not Supported 00:30:20.981 Write Uncorrectable Command: Not Supported 00:30:20.981 Dataset Management Command: Supported 00:30:20.981 Write Zeroes Command: Supported 00:30:20.981 Set Features Save Field: Not Supported 00:30:20.981 Reservations: Not Supported 00:30:20.981 Timestamp: Not Supported 00:30:20.981 Copy: Not Supported 00:30:20.981 Volatile Write Cache: Present 00:30:20.981 Atomic Write Unit (Normal): 1 00:30:20.981 Atomic Write Unit (PFail): 1 00:30:20.981 Atomic Compare & Write Unit: 1 00:30:20.981 Fused Compare & Write: Not Supported 00:30:20.981 Scatter-Gather List 00:30:20.981 SGL Command Set: Supported 00:30:20.981 SGL Keyed: Not Supported 00:30:20.981 SGL Bit Bucket Descriptor: Not Supported 00:30:20.981 SGL Metadata Pointer: Not Supported 00:30:20.981 Oversized SGL: Not Supported 00:30:20.981 SGL Metadata Address: Not Supported 00:30:20.981 SGL Offset: Supported 00:30:20.981 Transport SGL Data Block: Not Supported 00:30:20.981 Replay Protected Memory Block: Not Supported 00:30:20.981 00:30:20.981 Firmware Slot Information 00:30:20.981 ========================= 00:30:20.981 Active slot: 0 00:30:20.981 00:30:20.981 Asymmetric Namespace Access 00:30:20.981 =========================== 00:30:20.981 Change Count : 0 00:30:20.981 Number of ANA Group Descriptors : 1 00:30:20.981 ANA Group Descriptor : 0 00:30:20.981 ANA Group ID : 1 00:30:20.981 Number of NSID Values : 1 00:30:20.981 Change Count : 0 00:30:20.981 ANA State : 1 00:30:20.981 Namespace Identifier : 1 00:30:20.981 00:30:20.981 Commands Supported and Effects 00:30:20.981 ============================== 00:30:20.981 Admin Commands 00:30:20.981 -------------- 00:30:20.981 Get Log Page (02h): Supported 00:30:20.981 Identify (06h): Supported 00:30:20.981 Abort (08h): Supported 00:30:20.981 Set Features (09h): Supported 00:30:20.981 Get Features (0Ah): Supported 00:30:20.981 Asynchronous Event Request (0Ch): Supported 00:30:20.981 Keep Alive (18h): Supported 00:30:20.981 I/O Commands 00:30:20.981 ------------ 00:30:20.981 Flush (00h): Supported 00:30:20.981 Write (01h): Supported LBA-Change 00:30:20.981 Read (02h): Supported 00:30:20.981 Write Zeroes (08h): Supported LBA-Change 00:30:20.981 Dataset Management (09h): Supported 00:30:20.981 00:30:20.981 Error Log 00:30:20.981 ========= 00:30:20.981 Entry: 0 00:30:20.981 Error Count: 0x3 00:30:20.981 Submission Queue Id: 0x0 00:30:20.981 Command Id: 0x5 00:30:20.981 Phase Bit: 0 00:30:20.981 Status Code: 0x2 00:30:20.981 Status Code Type: 0x0 00:30:20.981 Do Not Retry: 1 00:30:20.981 
Error Location: 0x28 00:30:20.981 LBA: 0x0 00:30:20.981 Namespace: 0x0 00:30:20.981 Vendor Log Page: 0x0 00:30:20.981 ----------- 00:30:20.981 Entry: 1 00:30:20.981 Error Count: 0x2 00:30:20.981 Submission Queue Id: 0x0 00:30:20.981 Command Id: 0x5 00:30:20.981 Phase Bit: 0 00:30:20.981 Status Code: 0x2 00:30:20.981 Status Code Type: 0x0 00:30:20.981 Do Not Retry: 1 00:30:20.981 Error Location: 0x28 00:30:20.981 LBA: 0x0 00:30:20.981 Namespace: 0x0 00:30:20.981 Vendor Log Page: 0x0 00:30:20.981 ----------- 00:30:20.981 Entry: 2 00:30:20.981 Error Count: 0x1 00:30:20.981 Submission Queue Id: 0x0 00:30:20.981 Command Id: 0x4 00:30:20.981 Phase Bit: 0 00:30:20.981 Status Code: 0x2 00:30:20.981 Status Code Type: 0x0 00:30:20.981 Do Not Retry: 1 00:30:20.981 Error Location: 0x28 00:30:20.981 LBA: 0x0 00:30:20.982 Namespace: 0x0 00:30:20.982 Vendor Log Page: 0x0 00:30:20.982 00:30:20.982 Number of Queues 00:30:20.982 ================ 00:30:20.982 Number of I/O Submission Queues: 128 00:30:20.982 Number of I/O Completion Queues: 128 00:30:20.982 00:30:20.982 ZNS Specific Controller Data 00:30:20.982 ============================ 00:30:20.982 Zone Append Size Limit: 0 00:30:20.982 00:30:20.982 00:30:20.982 Active Namespaces 00:30:20.982 ================= 00:30:20.982 get_feature(0x05) failed 00:30:20.982 Namespace ID:1 00:30:20.982 Command Set Identifier: NVM (00h) 00:30:20.982 Deallocate: Supported 00:30:20.982 Deallocated/Unwritten Error: Not Supported 00:30:20.982 Deallocated Read Value: Unknown 00:30:20.982 Deallocate in Write Zeroes: Not Supported 00:30:20.982 Deallocated Guard Field: 0xFFFF 00:30:20.982 Flush: Supported 00:30:20.982 Reservation: Not Supported 00:30:20.982 Namespace Sharing Capabilities: Multiple Controllers 00:30:20.982 Size (in LBAs): 3125627568 (1490GiB) 00:30:20.982 Capacity (in LBAs): 3125627568 (1490GiB) 00:30:20.982 Utilization (in LBAs): 3125627568 (1490GiB) 00:30:20.982 UUID: 783d2fa6-c8be-4bdb-8353-8be005d4a57b 00:30:20.982 Thin Provisioning: Not Supported 00:30:20.982 Per-NS Atomic Units: Yes 00:30:20.982 Atomic Boundary Size (Normal): 0 00:30:20.982 Atomic Boundary Size (PFail): 0 00:30:20.982 Atomic Boundary Offset: 0 00:30:20.982 NGUID/EUI64 Never Reused: No 00:30:20.982 ANA group ID: 1 00:30:20.982 Namespace Write Protected: No 00:30:20.982 Number of LBA Formats: 1 00:30:20.982 Current LBA Format: LBA Format #00 00:30:20.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:20.982 00:30:20.982 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:20.982 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.982 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:30:20.982 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.982 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:30:20.982 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.982 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.982 rmmod nvme_tcp 00:30:21.242 rmmod nvme_fabrics 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:30:21.242 06:40:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.242 06:40:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:23.147 06:40:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:26.436 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:26.436 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:27.815 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:27.815 00:30:27.815 real 0m17.390s 00:30:27.815 user 0m4.389s 00:30:27.815 sys 0m8.822s 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.815 ************************************ 00:30:27.815 END TEST nvmf_identify_kernel_target 00:30:27.815 ************************************ 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.815 ************************************ 00:30:27.815 START TEST nvmf_auth_host 00:30:27.815 ************************************ 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:27.815 * Looking for test storage... 
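
For reference, the configfs sequence that the nvmf_identify_kernel_target run above drives (the mkdir/echo/ln -s calls under /sys/kernel/config/nvmet) can be reproduced standalone as the sketch below. The NQN, backing device, address, transport, and port values are taken directly from the trace; the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) follow the standard kernel nvmet configfs layout and are inferred where the trace truncates the redirect targets, so treat them as assumptions.

subnqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$subnqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys"                    # creating the subsystem auto-populates namespaces/
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo "SPDK-$subnqn" > "$subsys/attr_model"            # reported as Model Number in identify above
echo 1 > "$subsys/attr_allow_any_host"                # skip the per-host allow-list
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                   # expose the subsystem on the port

After this, the discovery controller at 10.0.0.1:4420 returns the two log entries seen above (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn). Teardown is the mirror image traced at the end of the test: rm -f the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.
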
00:30:27.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:30:27.815 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.075 --rc genhtml_branch_coverage=1 00:30:28.075 --rc genhtml_function_coverage=1 00:30:28.075 --rc genhtml_legend=1 00:30:28.075 --rc geninfo_all_blocks=1 00:30:28.075 --rc geninfo_unexecuted_blocks=1 00:30:28.075 00:30:28.075 ' 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.075 --rc genhtml_branch_coverage=1 00:30:28.075 --rc genhtml_function_coverage=1 00:30:28.075 --rc genhtml_legend=1 00:30:28.075 --rc geninfo_all_blocks=1 00:30:28.075 --rc geninfo_unexecuted_blocks=1 00:30:28.075 00:30:28.075 ' 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.075 --rc genhtml_branch_coverage=1 00:30:28.075 --rc genhtml_function_coverage=1 00:30:28.075 --rc genhtml_legend=1 00:30:28.075 --rc geninfo_all_blocks=1 00:30:28.075 --rc geninfo_unexecuted_blocks=1 00:30:28.075 00:30:28.075 ' 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.075 --rc genhtml_branch_coverage=1 00:30:28.075 --rc genhtml_function_coverage=1 00:30:28.075 --rc genhtml_legend=1 00:30:28.075 --rc geninfo_all_blocks=1 00:30:28.075 --rc geninfo_unexecuted_blocks=1 00:30:28.075 00:30:28.075 ' 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.075 06:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.075 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:28.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.076 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.646 06:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:34.646 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:34.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.646 
06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.646 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:34.647 Found net devices under 0000:86:00.0: cvl_0_0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:34.647 Found net devices under 0000:86:00.1: cvl_0_1 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.647 06:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:30:34.647 00:30:34.647 --- 10.0.0.2 ping statistics --- 00:30:34.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.647 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:34.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:30:34.647 00:30:34.647 --- 10.0.0.1 ping statistics --- 00:30:34.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.647 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=681307 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 681307 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 681307 ']' 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
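
The nvmftestinit sequence traced above splits the two e810 ports across network namespaces before the auth target comes up. Collected into one place, the commands are the sketch below; they are lifted verbatim from the trace, where cvl_0_0 and cvl_0_1 are the net devices found under 0000:86:00.0 and 0000:86:00.1.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

With both pings answering, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), which is the nvmfpid=681307 process the trace waits on here.
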
00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0b976ddab12fc455abdcbf9db2e876ee 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.TQT 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0b976ddab12fc455abdcbf9db2e876ee 0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0b976ddab12fc455abdcbf9db2e876ee 0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0b976ddab12fc455abdcbf9db2e876ee 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.TQT 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.TQT 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.TQT 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.647 06:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ee0b3822b30897eaee8371b95207bf0057621b07fbac79601599bb2684178fd2 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Dj6 00:30:34.647 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ee0b3822b30897eaee8371b95207bf0057621b07fbac79601599bb2684178fd2 3 00:30:34.648 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ee0b3822b30897eaee8371b95207bf0057621b07fbac79601599bb2684178fd2 3 00:30:34.648 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.648 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.648 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ee0b3822b30897eaee8371b95207bf0057621b07fbac79601599bb2684178fd2 00:30:34.648 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:30:34.648 06:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Dj6 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Dj6 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Dj6 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=070eea581f8e7c60d8132cd194993a261a96c31c77b58002 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gki 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 070eea581f8e7c60d8132cd194993a261a96c31c77b58002 0 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 070eea581f8e7c60d8132cd194993a261a96c31c77b58002 0 
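
The gen_dhchap_key/format_dhchap_key traces above repeat once per keys[i]/ckeys[i] pair; a minimal standalone sketch of what they appear to do is below. The digest mapping (null=0, sha256=1, sha384=2, sha512=3) and the xxd-based random key generation are taken straight from the trace; the base64/CRC-32 encoding inside the python step is an assumption based on the standard DH-HMAC-CHAP secret representation, since the trace truncates the script body, so treat that part as illustrative rather than the helper's exact code.

gen_dhchap_key() {    # usage: gen_dhchap_key <digest> <hex-length>
    local digest=$1 len=$2 key
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex digits of random key material
    python3 - "$key" "${digests[$digest]}" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
# assumed encoding: base64 over the key bytes plus a little-endian CRC-32 of the key
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key null 32      # like keys[0] above: 32 hex digits, no digest
gen_dhchap_key sha512 64    # like ckeys[0] above

In the trace, each formatted secret is additionally written to a mktemp file (e.g. /tmp/spdk.key-null.TQT) and chmod 0600'd before being stashed in the keys[]/ckeys[] arrays for the auth tests that follow.
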
00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=070eea581f8e7c60d8132cd194993a261a96c31c77b58002 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gki 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gki 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.gki 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c16bbbc9e0b69b00bf351b927fff54c11575f114fb66dc90 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uyf 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c16bbbc9e0b69b00bf351b927fff54c11575f114fb66dc90 2 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c16bbbc9e0b69b00bf351b927fff54c11575f114fb66dc90 2 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c16bbbc9e0b69b00bf351b927fff54c11575f114fb66dc90 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uyf 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uyf 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.uyf 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.648 06:41:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=132562491ae548a581c3dae5db2999de 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ymw 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 132562491ae548a581c3dae5db2999de 1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 132562491ae548a581c3dae5db2999de 1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=132562491ae548a581c3dae5db2999de 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ymw 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ymw 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ymw 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f4a77f3216ddd33e32444cc7bbae9dea 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TkZ 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f4a77f3216ddd33e32444cc7bbae9dea 1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f4a77f3216ddd33e32444cc7bbae9dea 1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=f4a77f3216ddd33e32444cc7bbae9dea 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TkZ 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TkZ 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.TkZ 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d0e9262f3c0230dccedd88d8a6e4cf2791535a23080e44cb 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cip 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d0e9262f3c0230dccedd88d8a6e4cf2791535a23080e44cb 2 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d0e9262f3c0230dccedd88d8a6e4cf2791535a23080e44cb 2 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d0e9262f3c0230dccedd88d8a6e4cf2791535a23080e44cb 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cip 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cip 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.cip 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.648 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:34.649 06:41:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=530e1c04fdd59bc15fcfd433ddc466c2 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6Rk 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 530e1c04fdd59bc15fcfd433ddc466c2 0 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 530e1c04fdd59bc15fcfd433ddc466c2 0 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=530e1c04fdd59bc15fcfd433ddc466c2 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6Rk 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6Rk 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6Rk 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9295f32be31fc2035db65abc9c0970272061c352e13ee7c2ab6d10702644625f 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.82E 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9295f32be31fc2035db65abc9c0970272061c352e13ee7c2ab6d10702644625f 3 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9295f32be31fc2035db65abc9c0970272061c352e13ee7c2ab6d10702644625f 3 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9295f32be31fc2035db65abc9c0970272061c352e13ee7c2ab6d10702644625f 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.82E 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.82E 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.82E 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 681307 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 681307 ']' 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:34.649 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TQT 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Dj6 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Dj6 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.gki 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.uyf ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.uyf 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ymw 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.TkZ ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TkZ 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.cip 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6Rk ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6Rk 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.82E 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:34.908 06:41:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:34.908 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:34.909 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:34.909 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:34.909 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:34.909 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:34.909 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:34.909 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:30:34.909 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:34.909 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:35.167 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:35.167 06:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:37.698 Waiting for block devices as requested 00:30:37.698 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:37.956 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:37.956 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:37.956 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:37.956 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:38.215 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:38.215 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:38.215 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:38.215 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:38.473 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:38.473 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:38.473 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:38.733 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:38.733 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:38.733 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:38.991 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:38.991 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:39.559 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:39.559 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:39.559 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:39.559 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:39.559 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:39.559 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:39.559 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:39.559 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:39.560 No valid GPT data, bailing 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:39.560 06:41:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:30:39.560
00:30:39.560 Discovery Log Number of Records 2, Generation counter 2
00:30:39.560 =====Discovery Log Entry 0======
00:30:39.560 trtype: tcp
00:30:39.560 adrfam: ipv4
00:30:39.560 subtype: current discovery subsystem
00:30:39.560 treq: not specified, sq flow control disable supported
00:30:39.560 portid: 1
00:30:39.560 trsvcid: 4420
00:30:39.560 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:30:39.560 traddr: 10.0.0.1
00:30:39.560 eflags: none
00:30:39.560 sectype: none
00:30:39.560 =====Discovery Log Entry 1======
00:30:39.560 trtype: tcp
00:30:39.560 adrfam: ipv4
00:30:39.560 subtype: nvme subsystem
00:30:39.560 treq: not specified, sq flow control disable supported
00:30:39.560 portid: 1
00:30:39.560 trsvcid: 4420
00:30:39.560 subnqn: nqn.2024-02.io.spdk:cnode0
00:30:39.560 traddr: 10.0.0.1
00:30:39.560 eflags: none
00:30:39.560 sectype: none
00:30:39.560 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==:
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==:
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:39.819 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.820 nvme0n1 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
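At this point the harness has authenticated once with sha256/ffdhe2048 using key1/ckey1, confirmed the nvme0 controller came up, detached it, and is entering the full digest x dhgroup x keyid matrix. Stripped of the rpc_cmd and nvmet_auth_set_key wrappers, one iteration reduces to the sketch below. The RPC names and flags are exactly those traced above; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are inferred rather than shown in the trace, and rpc.py stands in for the harness's rpc_cmd socket wrapper.

# Kernel target side: per-host DH-HMAC-CHAP parameters (attribute names assumed).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test
echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:00:MDcwZWVh...' > "$host/dhchap_key"       # host secret, full value in the trace
echo 'DHHC-1:02:YzE2YmJi...' > "$host/dhchap_ctrl_key"  # controller secret, for bidirectional auth

# SPDK initiator side, the RPCs traced above; key1/ckey1 were registered
# earlier with keyring_file_add_key.
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next combination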
00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:39.820 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.079 nvme0n1 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.079 06:41:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:40.079 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.080 06:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.339 nvme0n1 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:40.339 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.340 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.599 nvme0n1 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.599 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.859 nvme0n1 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.859 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.119 nvme0n1 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.119 06:41:12 
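The nvmet_auth_set_key trace repeated above (host/auth.sh@42-@51) is the target-side half of each iteration: the echoes are consistent with pushing the negotiated digest, the FFDHE group, and the DHHC-1 secrets for the current keyid into the kernel nvmet target's configfs entry for the allowed host, and the @51 test only installs a controller key when a ckey exists (keyid 4 has none). A minimal sketch of that helper, reconstructed from the trace; the configfs path and the global keys/ckeys arrays are assumptions, since xtrace does not show redirection targets:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # assumed configfs entry for the allowed host; the log only shows the echoes
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host}/dhchap_hash"    # @48
    echo "${dhgroup}" > "${host}/dhchap_dhgroup"      # @49
    echo "${key}" > "${host}/dhchap_key"              # @50
    if [[ -n ${ckey} ]]; then                         # @51 skips this when ckey is empty
        echo "${ckey}" > "${host}/dhchap_ctrl_key"
    fi
}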
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.119 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.379 nvme0n1 00:30:41.379 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.379 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.379 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.379 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.379 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.379 06:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:41.379 
06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.379 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.638 nvme0n1 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.638 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.639 06:41:13 
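Every attach in this log is preceded by the same nvmf/common.sh@769-@783 run, which is the get_main_ns_ip helper resolving the address to dial: it maps the transport to the name of an environment variable (NVMF_INITIATOR_IP for tcp, NVMF_FIRST_TARGET_IP for rdma) and dereferences it, which is how the trace arrives at 10.0.0.1. A sketch reconstructed from that trace; the bail-out returns are assumptions for the [[ -z ]] guards whose failure path never fires here:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                    # @775, first guard
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775, second guard
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # @776: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                             # @778: indirect expansion
    echo "${!ip}"                                           # @783: prints 10.0.0.1 here
}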
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.639 nvme0n1 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.639 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.898 06:41:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.898 nvme0n1 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.898 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:42.157 06:41:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.157 nvme0n1 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.157 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.417 06:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
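That completes the sha256/ffdhe3072 sweep. The host-side half of every iteration is the connect_authenticate block traced at host/auth.sh@55-@65: restrict the initiator to a single digest and DH group, attach with the per-keyid secrets, confirm the controller actually appeared, then detach so the next combination starts clean. A sketch assembled from the RPCs and flags visible in the trace (rpc_cmd is the framework's wrapper around SPDK's rpc.py; the tcp transport is hardcoded here to match this run):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # verbatim from @58: expands to nothing when there is no controller key
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    # @64: the nvme0n1 namespace shows up and the controller list must read back nvme0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0               # @65
}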
bdev_nvme_detach_controller nvme0 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.417 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.676 nvme0n1 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.676 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:42.677 06:41:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.677 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.936 nvme0n1 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.936 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.195 nvme0n1 00:30:43.195 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.195 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.195 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.196 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.196 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.196 06:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.196 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.196 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.196 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.196 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.455 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.714 nvme0n1 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.714 06:41:15 
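The secrets themselves follow the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, and this section cycles through all four values of the second field (00 through 03). In nvme-cli's gen-dhchap-key convention that field records how the secret was transformed (00 unhashed; 01, 02, 03 hashed with SHA-256, SHA-384, SHA-512), and the base64 payload carries the secret plus a CRC-32 check; that is stated here from the tooling's documentation, not from this log. A hedged way to mint keys in the same shapes seen above, assuming nvme-cli is installed:

nvme gen-dhchap-key --key-length=32 --hmac=0   # yields a DHHC-1:00:...: secret
nvme gen-dhchap-key --key-length=48 --hmac=2   # yields a DHHC-1:02:...: secret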
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.714 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.974 nvme0n1 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.974 06:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.542 nvme0n1 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 
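
Editor's note: every iteration visible in this trace is the same host-side round trip: restrict the bdev layer to a single digest/dhgroup pair, attach a controller with the key names under test, confirm the controller came up, then detach. Condensed into plain shell below, with names exactly as they appear in the trace; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, and key1/ckey1 are names of keys set up earlier in the script, outside this excerpt. The "[[ nvme0 == \n\v\m\e\0 ]]" lines are not corruption; that is how bash xtrace renders a quoted, literal right-hand side inside [[ ]].

    # One authentication probe, as driven by host/auth.sh in this log:
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The attach only succeeds if DH-HMAC-CHAP negotiation passed; verify, clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
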
00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.542 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.801 nvme0n1 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.801 06:41:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.801 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.369 nvme0n1 00:30:45.369 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.369 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.369 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.369 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.369 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.369 06:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.369 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.369 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.369 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.369 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.369 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.369 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.370 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.628 nvme0n1 00:30:45.628 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.628 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.629 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.629 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.629 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.629 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
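
Editor's note: key id 4 above is the one-way case. Its controller key is empty (the trace shows "ckey=" followed by "[[ -z '' ]]", with no second echo), so only the host authenticates itself. The line traced at host/auth.sh@58 shows how the optional flag is dropped:

    # ${var:+words} expands to the words only when var is set and non-empty:
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # keyid 0-3: ckey=(--dhchap-ctrlr-key ckeyN)  -> bidirectional authentication
    # keyid 4:   ckey=()                          -> flag omitted, host-only auth
    # later expanded as: rpc_cmd bdev_nvme_attach_controller ... "${ckey[@]}"

This matches the attach calls in the trace: key ids 0 through 3 carry both --dhchap-key and --dhchap-ctrlr-key, while key id 4 is attached with --dhchap-key alone.
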
nvmf/common.sh@770 -- # ip_candidates=() 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.888 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.147 nvme0n1 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.147 06:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
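
Editor's note: the secrets echoed throughout use the standard NVMe DH-HMAC-CHAP representation, DHHC-1:<tt>:<base64>:, where <tt> names the transform applied to the secret (00 meaning cleartext, and 01/02/03 corresponding to SHA-256/SHA-384/SHA-512, per the secret format in the NVMe specs) and the base64 payload carries the secret followed by a CRC-32. A quick, purely illustrative way to take one apart; the sample secret is copied from this log:

    secret='DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/:'
    IFS=: read -r fmt transform b64 _ <<< "$secret"
    echo "$fmt"        # DHHC-1 - format tag
    echo "$transform"  # 00     - secret stored in cleartext (not hashed)
    echo "$b64"        # base64(secret || crc32)
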
common/autotest_common.sh@10 -- # set +x 00:30:46.714 nvme0n1 00:30:46.714 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.714 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.714 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.714 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.714 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.714 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.973 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.974 06:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.557 nvme0n1 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:47.557 
06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.557 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.558 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.559 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.130 nvme0n1 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.130 
06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.130 06:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.698 nvme0n1 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
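
Editor's note: the get_main_ns_ip helper is traced in full above (nvmf/common.sh@769-783): it maps the transport to the name of an environment variable, then dereferences it. One plausible reconstruction, with the function wrapper and the indirect expansion inferred from the trace rather than visible in it:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # The trace shows both -z tests against the same source line (@775):
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # traced here as ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # ${!ip} dereferences to 10.0.0.1
        echo "${!ip}"
    }
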
common/autotest_common.sh@10 -- # set +x 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.698 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.957 06:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.526 nvme0n1 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:49.526 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.527 nvme0n1 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
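
Editor's note: at host/auth.sh@100 just above, the trace enters a new digest; from here the sweep runs sha384 with ffdhe2048. The @100-@104 markers outline the full test matrix. A skeleton reconstructed from those markers, with the array contents as echoed throughout this log:

    for digest in "${digests[@]}"; do            # sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do       # 0 1 2 3 4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done
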
common/autotest_common.sh@10 -- # set +x 00:30:49.527 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.786 nvme0n1 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:49.786 06:41:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:49.786 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.045 nvme0n1 00:30:50.045 06:41:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:50.045 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.046 06:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.304 nvme0n1 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:50.304 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.305 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.564 nvme0n1 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.564 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.824 nvme0n1 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.824 
06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:50.824 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:50.825 06:41:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.825 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.084 nvme0n1 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:51.084 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.085 06:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.344 nvme0n1 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.344 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.603 nvme0n1 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:51.603 
06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:51.603 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.604 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.862 nvme0n1 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.862 
06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:51.862 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.863 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 nvme0n1 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:52.122 06:41:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.122 06:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.381 nvme0n1 00:30:52.381 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.381 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.381 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.381 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.381 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.381 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.639 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.640 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:52.640 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.640 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.898 nvme0n1 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.898 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.899 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.158 nvme0n1 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.158 06:41:24 
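
get_main_ns_ip, expanded repeatedly above (nvmf/common.sh@769-783), picks the target address by transport and resolves it through bash indirect expansion; here "tcp" maps to NVMF_INITIATOR_IP, which holds 10.0.0.1. A minimal reconstruction from the trace follows; the name TEST_TRANSPORT for the variable that expands to "tcp" is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
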
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.158 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.417 nvme0n1 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:53.417 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.418 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.985 nvme0n1 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.985 06:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.244 nvme0n1 00:30:54.244 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.244 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.244 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.244 06:41:26 
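
After each authenticated attach, host/auth.sh@64-65 confirms that authentication actually produced a controller and then tears it down before the next key/dhgroup combination; the \n\v\m\e\0 escapes in the trace are just bash quoting the right-hand side of the comparison:

    # succeed only if the authenticated controller registered under the expected name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
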
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.244 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.244 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.503 06:41:26 
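
The DHHC-1 secrets echoed throughout this test follow the NVMe in-band authentication key format DHHC-1:NN:<base64>:, where NN records how the secret was transformed (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret plus a CRC-32 check value. For illustration only, a comparable secret can be generated with nvme-cli's gen-dhchap-key command (a hypothetical invocation, not part of this script):

    # --hmac=1 requests a SHA-256-transformed secret, 32 bytes long
    nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0
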
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.503 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.763 nvme0n1 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:54.763 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.763 
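
nvmet_auth_set_key (host/auth.sh@42-51, expanded above for keyid 3) programs the kernel nvmet target side with the matching digest, DH group and secrets. The xtrace hides the redirect targets of the echoes; the configfs paths in this sketch are an assumption based on the Linux nvmet host attribute layout:

    nvmet_auth_set_key() {   # sketch; configfs paths assumed
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup"      > "$host/dhchap_dhgroup"
        echo "$key"          > "$host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
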
06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.332 nvme0n1 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.332 06:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.332 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.592 nvme0n1 00:30:55.592 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.592 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.592 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.592 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.592 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.592 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.851 06:41:27 
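
host/auth.sh@101-104, visible just above, shows the loops driving this whole section: for the sha384 digest every DH group is exercised against every configured key id. The ffdhe4096 and ffdhe6144 passes have completed at this point and the ffdhe8192 pass is starting:

    for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do       # 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
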
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.851 06:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.417 nvme0n1 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.417 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.418 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.984 nvme0n1 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.984 
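
Every RPC in this log is bracketed by xtrace_disable / set +x and followed by a [[ 0 == 0 ]] status check from common/autotest_common.sh: rpc_cmd silences tracing while it forwards its arguments to SPDK's rpc.py, and the harness then verifies the saved exit code. A rough sketch of that visible behaviour (the real wrapper also maintains a persistent rpc.py session; $rootdir pointing at the SPDK tree is an assumption):

    rpc_cmd() {
        local rc
        xtrace_disable
        "$rootdir/scripts/rpc.py" "$@"
        rc=$?
        xtrace_restore
        return $rc
    }
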
06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.984 06:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.549 nvme0n1 00:30:57.549 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.549 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.549 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.549 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.549 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.549 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.808 06:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.374 nvme0n1 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.374 06:41:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:58.374 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:58.375 06:41:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.375 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.942 nvme0n1 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.942 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:59.201 nvme0n1 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:59.201 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.202 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.461 nvme0n1 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:59.461 
06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.461 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.720 nvme0n1 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.720 
06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.720 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.721 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.721 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.721 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:59.721 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.721 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.979 nvme0n1 00:30:59.979 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.979 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.979 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.979 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.979 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.979 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.979 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.980 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.239 nvme0n1 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.239 06:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.239 nvme0n1 00:31:00.239 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.239 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.239 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.498 
06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.498 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:00.498 06:41:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.499 nvme0n1 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.499 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:00.757 06:41:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:00.757 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.758 nvme0n1 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.758 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.016 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.017 06:41:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 nvme0n1 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:01.276 
06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.276 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.277 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.277 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.277 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.277 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.277 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:01.277 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.277 06:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
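The trace above and below repeats the same connect_authenticate sequence once per (digest, dhgroup, keyid) combination: restrict the host to a single DH-HMAC-CHAP digest/dhgroup pair via bdev_nvme_set_options, attach nvme0 over TCP with --dhchap-key/--dhchap-ctrlr-key, confirm the controller name with bdev_nvme_get_controllers piped through jq, then detach. A minimal standalone sketch of one such iteration follows, assuming a target is already serving nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 and that $rootdir points at an SPDK checkout; the keyring_file_add_key registration step and the /tmp key paths are assumptions for illustration (in this run the harness provisions the named keys itself):

    rpc=$rootdir/scripts/rpc.py

    # Allow exactly one digest and one FFDHE group for DH-HMAC-CHAP,
    # as connect_authenticate does for each loop iteration.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Register the DHHC-1 secrets under the names the attach call expects
    # (hypothetical file paths; key names key2/ckey2 match the trace).
    $rpc keyring_file_add_key key2 /tmp/key2.dhchap
    $rpc keyring_file_add_key ckey2 /tmp/ckey2.dhchap

    # Attach with in-band authentication, verify the controller came up,
    # then tear it down again.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0

The 10.0.0.1 address is what get_main_ns_ip resolves here: it selects NVMF_INITIATOR_IP for tcp transports (NVMF_FIRST_TARGET_IP for rdma) and only falls back when the candidate is empty. The DHHC-1:NN:...: strings are NVMe in-band authentication configured secrets; per that format, the two-digit field after DHHC-1 names the secret's transformation hash (00 = none, 01/02/03 = SHA-256/384/512).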
00:31:01.277 nvme0n1 00:31:01.277 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.277 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.277 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.277 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.277 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.277 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:01.536 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:01.537 06:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.537 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.796 nvme0n1 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.796 06:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.796 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.797 06:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.797 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.056 nvme0n1 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.056 06:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.315 nvme0n1 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.315 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.574 nvme0n1 00:31:02.574 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.574 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.574 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.574 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.574 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.574 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.833 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.092 nvme0n1 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.092 06:41:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.092 06:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.352 nvme0n1 00:31:03.352 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.352 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.352 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.352 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.352 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:03.611 06:41:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.611 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.870 nvme0n1 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.870 06:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.438 nvme0n1 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.438 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.439 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:04.439 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.439 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.697 nvme0n1 00:31:04.697 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.697 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.697 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.697 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.697 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.697 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:04.956 06:41:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.956 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.214 nvme0n1 00:31:05.214 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.214 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.214 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.214 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.214 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.214 06:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
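
On the initiator side, connect_authenticate (host/auth.sh@55-61) pins the SPDK bdev layer to a single digest/DH-group pair and then attaches with the matching key. rpc_cmd in the trace is the autotest wrapper around SPDK's JSON-RPC client; roughly the same sequence with scripts/rpc.py directly (the wrapper name and socket defaults are assumptions, the RPC names and flags are exactly as traced):

  # Sketch of the keyid=4 iteration above, using rpc.py instead of rpc_cmd.
  # Restrict DH-HMAC-CHAP negotiation to one digest and one DH group:
  scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Attach, authenticating with key4; keyid 4 has no controller key in this
  # suite, so --dhchap-ctrlr-key is omitted (host/auth.sh@58 expands to nothing):
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4

  # Verify the controller came up, then drop it (host/auth.sh@64-65):
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0
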
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI5NzZkZGFiMTJmYzQ1NWFiZGNiZjlkYjJlODc2ZWVmk4E/: 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: ]] 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUwYjM4MjJiMzA4OTdlYWVlODM3MWI5NTIwN2JmMDA1NzYyMWIwN2ZiYWM3OTYwMTU5OWJiMjY4NDE3OGZkMprsxS0=: 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.214 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
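
The for dhgroup / for keyid lines (host/auth.sh@101-103) give this whole stretch its shape: for each DH group, every keyid from 0 through 4 is provisioned on the target and connected once. Condensed below, with the helper bodies elided and the array contents inferred from the trace:

  # Sketch of the driver loop at host/auth.sh@101-104. keys/ckeys are the
  # DHHC-1 arrays set up earlier in auth.sh; the digest is sha512 throughout
  # this part of the log.
  dhgroups=(ffdhe6144 ffdhe8192)            # groups seen in this stretch
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do          # keyids 0..4
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side (configfs)
      connect_authenticate sha512 "$dhgroup" "$keyid"  # host side (JSON-RPC)
    done
  done
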
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.473 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:05.473 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.473 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:05.473 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:05.473 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:05.473 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:05.473 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.473 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.038 nvme0n1 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.038 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.039 06:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.605 nvme0n1 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.605 06:41:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.605 06:41:38 
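
The ip_candidates block that runs before every attach (nvmf/common.sh@769-783, expanded again just below) is get_main_ns_ip picking the address to dial: for the tcp transport it resolves, via indirect expansion, to $NVMF_INITIATOR_IP, which is 10.0.0.1 in this run. Reconstructed from the traced lines; the surrounding function plumbing is an assumption:

  # Reconstruction of get_main_ns_ip from the nvmf/common.sh trace lines.
  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # @775: bail out if the transport or its candidate variable is unset
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # @776: name of the variable to read
    [[ -z ${!ip} ]] && return 1           # @778: e.g. [[ -z 10.0.0.1 ]]
    echo "${!ip}"                         # @783: echo 10.0.0.1
  }
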
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.605 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.170 nvme0n1 00:31:07.170 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.170 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==: 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: ]] 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTMwZTFjMDRmZGQ1OWJjMTVmY2ZkNDMzZGRjNDY2YzJUavto: 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.171 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.171 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.171 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.171 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.171 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.171 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.171 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.429 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:07.429 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.429 
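
Every secret in this sweep uses the DHHC-1 transport format defined for NVMe in-band authentication: DHHC-1:<xx>:<base64>:, where <xx> says how the secret is to be used (00 = as presented, 01/02/03 = transformed with SHA-256/384/512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick sanity check against one key from the trace; the field meanings come from the spec, not from this log:

  # Sketch: decode one DHHC-1 secret from the trace. The payload is
  # base64(secret || CRC-32), so a 48-byte secret decodes to 52 bytes.
  key='DHHC-1:02:ZDBlOTI2MmYzYzAyMzBkY2NlZGQ4OGQ4YTZlNGNmMjc5MTUzNWEyMzA4MGU0NGNi7F8Cfw==:'
  payload=$(cut -d: -f3 <<< "$key")
  echo -n "$payload" | base64 -d | wc -c   # prints 52 (48 + 4-byte CRC)
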
06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.995 nvme0n1 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI5NWYzMmJlMzFmYzIwMzVkYjY1YWJjOWMwOTcwMjcyMDYxYzM1MmUxM2VlN2MyYWI2ZDEwNzAyNjQ0NjI1ZvHhVgU=: 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.995 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.562 nvme0n1 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.562 request: 00:31:08.562 { 00:31:08.562 "name": "nvme0", 00:31:08.562 "trtype": "tcp", 00:31:08.562 "traddr": "10.0.0.1", 00:31:08.562 "adrfam": "ipv4", 00:31:08.562 "trsvcid": "4420", 00:31:08.562 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:08.562 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:08.562 "prchk_reftag": false, 00:31:08.562 "prchk_guard": false, 00:31:08.562 "hdgst": false, 00:31:08.562 "ddgst": false, 00:31:08.562 "allow_unrecognized_csi": false, 00:31:08.562 "method": "bdev_nvme_attach_controller", 00:31:08.562 "req_id": 1 00:31:08.562 } 00:31:08.562 Got JSON-RPC error response 00:31:08.562 response: 00:31:08.562 { 00:31:08.562 "code": -5, 00:31:08.562 "message": "Input/output error" 00:31:08.562 } 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.562 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
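
The request/response pair just above is the first deliberate failure: host/auth.sh@112 wraps the attach in NOT, so the step passes only if connecting with no DH-CHAP key at all is rejected now that the target demands authentication. JSON-RPC code -5 is errno EIO ("Input/output error"). The same expectation, with a simplified stand-in for the autotest NOT helper (the real one, visible in the trace as the es > 128 bookkeeping, also distinguishes crash exit codes):

  # Sketch: a negative attach; the test step succeeds only if the RPC fails.
  NOT() { if "$@"; then return 1; else return 0; fi; }  # simplified stand-in

  NOT scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0
  # rpc.py prints the JSON-RPC error shown above (code -5) and exits
  # non-zero; NOT turns that into success.
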
00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.821 request: 00:31:08.821 { 00:31:08.821 "name": "nvme0", 00:31:08.821 "trtype": "tcp", 00:31:08.821 "traddr": "10.0.0.1", 00:31:08.821 "adrfam": "ipv4", 00:31:08.821 "trsvcid": "4420", 00:31:08.821 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:08.821 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:08.821 "prchk_reftag": false, 00:31:08.821 "prchk_guard": false, 00:31:08.821 "hdgst": false, 00:31:08.821 "ddgst": false, 00:31:08.821 "dhchap_key": "key2", 00:31:08.821 "allow_unrecognized_csi": false, 00:31:08.821 "method": "bdev_nvme_attach_controller", 00:31:08.821 "req_id": 1 00:31:08.821 } 00:31:08.821 Got JSON-RPC error response 00:31:08.821 response: 00:31:08.821 { 00:31:08.821 "code": -5, 00:31:08.821 "message": "Input/output error" 00:31:08.821 } 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
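
The failure above (key2 only) and the one in the next request block (key1 paired with ckey2) complete the mismatch matrix: the target is provisioned for keyid 1 at this point, so no key, a wrong host key, and a wrong controller key are all refused with the same -5. Condensed, reusing the NOT stand-in from the previous sketch:

  # Sketch: the three rejected attach attempts of host/auth.sh@112-123.
  attach() {
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 "$@"
  }

  NOT attach                                             # no key at all
  NOT attach --dhchap-key key2                           # wrong host key
  NOT attach --dhchap-key key1 --dhchap-ctrlr-key ckey2  # wrong controller key
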
00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.821 request: 00:31:08.821 { 00:31:08.821 "name": "nvme0", 00:31:08.821 "trtype": "tcp", 00:31:08.821 "traddr": "10.0.0.1", 00:31:08.821 "adrfam": "ipv4", 00:31:08.821 "trsvcid": "4420", 00:31:08.821 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:08.821 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:08.821 "prchk_reftag": false, 00:31:08.821 "prchk_guard": false, 00:31:08.821 "hdgst": false, 00:31:08.821 "ddgst": false, 00:31:08.821 "dhchap_key": "key1", 00:31:08.821 "dhchap_ctrlr_key": "ckey2", 00:31:08.821 "allow_unrecognized_csi": false, 00:31:08.821 "method": "bdev_nvme_attach_controller", 00:31:08.821 "req_id": 1 00:31:08.821 } 00:31:08.821 Got JSON-RPC error response 00:31:08.821 response: 00:31:08.821 { 00:31:08.821 "code": -5, 00:31:08.821 "message": "Input/output 
error" 00:31:08.821 } 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.821 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.822 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.080 nvme0n1 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.080 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.339 request: 00:31:09.339 { 00:31:09.339 "name": "nvme0", 00:31:09.339 "dhchap_key": "key1", 00:31:09.339 "dhchap_ctrlr_key": "ckey2", 00:31:09.339 "method": "bdev_nvme_set_keys", 00:31:09.339 "req_id": 1 00:31:09.339 } 00:31:09.339 Got JSON-RPC error response 00:31:09.339 response: 00:31:09.339 { 00:31:09.339 "code": -13, 00:31:09.339 "message": "Permission denied" 00:31:09.339 } 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:09.339 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:10.275 06:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.275 06:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:10.275 06:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.275 06:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.275 06:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.275 06:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:10.275 06:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:11.209 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.209 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:11.209 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.209 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.209 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDcwZWVhNTgxZjhlN2M2MGQ4MTMyY2QxOTQ5OTNhMjYxYTk2YzMxYzc3YjU4MDAyPkmkKg==: 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: ]] 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzE2YmJiYzllMGI2OWIwMGJmMzUxYjkyN2ZmZjU0YzExNTc1ZjExNGZiNjZkYzkwwgEU6g==: 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.468 nvme0n1 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTMyNTYyNDkxYWU1NDhhNTgxYzNkYWU1ZGIyOTk5ZGVv1SUr: 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: ]] 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRhNzdmMzIxNmRkZDMzZTMyNDQ0Y2M3YmJhZTlkZWHfDQR9: 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:11.468 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.469 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.469 request: 00:31:11.469 { 00:31:11.469 "name": "nvme0", 00:31:11.727 "dhchap_key": "key2", 00:31:11.727 "dhchap_ctrlr_key": "ckey1", 00:31:11.727 "method": "bdev_nvme_set_keys", 00:31:11.727 "req_id": 1 00:31:11.727 } 00:31:11.727 Got JSON-RPC error response 00:31:11.727 response: 00:31:11.727 { 00:31:11.727 "code": -13, 00:31:11.727 "message": "Permission denied" 00:31:11.727 } 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:11.727 06:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:31:12.662 06:41:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.662 rmmod nvme_tcp 00:31:12.662 rmmod nvme_fabrics 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 681307 ']' 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 681307 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 681307 ']' 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 681307 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:12.662 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 681307 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 681307' 00:31:12.921 killing process with pid 681307 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 681307 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 681307 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:31:12.921 06:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:15.455 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:18.153 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:18.153 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:19.619 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:19.619 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TQT /tmp/spdk.key-null.gki /tmp/spdk.key-sha256.ymw /tmp/spdk.key-sha384.cip /tmp/spdk.key-sha512.82E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:19.619 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:22.912 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:22.912 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:22.912 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:31:22.913 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:22.913 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:22.913 00:31:22.913 real 0m54.696s 00:31:22.913 user 0m48.812s 00:31:22.913 sys 0m12.662s 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.913 ************************************ 00:31:22.913 END TEST nvmf_auth_host 00:31:22.913 ************************************ 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.913 ************************************ 00:31:22.913 START TEST nvmf_digest 00:31:22.913 ************************************ 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:22.913 * Looking for test storage... 
00:31:22.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:22.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.913 --rc genhtml_branch_coverage=1 00:31:22.913 --rc genhtml_function_coverage=1 00:31:22.913 --rc genhtml_legend=1 00:31:22.913 --rc geninfo_all_blocks=1 00:31:22.913 --rc geninfo_unexecuted_blocks=1 00:31:22.913 00:31:22.913 ' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:22.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.913 --rc genhtml_branch_coverage=1 00:31:22.913 --rc genhtml_function_coverage=1 00:31:22.913 --rc genhtml_legend=1 00:31:22.913 --rc geninfo_all_blocks=1 00:31:22.913 --rc geninfo_unexecuted_blocks=1 00:31:22.913 00:31:22.913 ' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:22.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.913 --rc genhtml_branch_coverage=1 00:31:22.913 --rc genhtml_function_coverage=1 00:31:22.913 --rc genhtml_legend=1 00:31:22.913 --rc geninfo_all_blocks=1 00:31:22.913 --rc geninfo_unexecuted_blocks=1 00:31:22.913 00:31:22.913 ' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:22.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.913 --rc genhtml_branch_coverage=1 00:31:22.913 --rc genhtml_function_coverage=1 00:31:22.913 --rc genhtml_legend=1 00:31:22.913 --rc geninfo_all_blocks=1 00:31:22.913 --rc geninfo_unexecuted_blocks=1 00:31:22.913 00:31:22.913 ' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.913 
06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.913 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:22.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.914 06:41:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.914 06:41:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.503 
06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:29.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:29.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:29.503 Found net devices under 0000:86:00.0: cvl_0_0 
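
[annotation] The two "Found net devices under ..." lines come from common.sh expanding /sys/bus/pci/devices/$pci/net/* for each E810 port it matched by PCI ID (0x8086:0x159b). A minimal stand-alone sketch of that sysfs lookup; the PCI address is just the one from this log:

    pci=0000:86:00.0
    # Every kernel net device owned by a PCI function appears as a
    # directory under its sysfs node; the basename is the interface name.
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue   # skip if the glob matched nothing
        echo "Found net devices under $pci: ${dev##*/}"
    done
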
00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:29.503 Found net devices under 0000:86:00.1: cvl_0_1 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.503 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:29.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:29.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:31:29.504 00:31:29.504 --- 10.0.0.2 ping statistics --- 00:31:29.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.504 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:29.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:29.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:31:29.504 00:31:29.504 --- 10.0.0.1 ping statistics --- 00:31:29.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.504 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:29.504 ************************************ 00:31:29.504 START TEST nvmf_digest_clean 00:31:29.504 ************************************ 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=695108 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 695108 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 695108 ']' 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:29.504 [2024-11-20 06:42:00.437307] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:31:29.504 [2024-11-20 06:42:00.437357] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.504 [2024-11-20 06:42:00.508350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.504 [2024-11-20 06:42:00.550630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.504 [2024-11-20 06:42:00.550665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.504 [2024-11-20 06:42:00.550673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.504 [2024-11-20 06:42:00.550679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.504 [2024-11-20 06:42:00.550688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
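
[annotation] nvmfappstart above launches nvmf_tgt inside the target namespace with --wait-for-rpc (pid 695108 in this run) and then blocks until the RPC socket answers. The real waitforlisten in autotest_common.sh is more elaborate (retry budget, socket probing); this is a hedged minimal equivalent of the same pattern:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the default RPC socket until the app answers, bailing out
    # early if the target process has already died.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.5
    done
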
00:31:29.504 [2024-11-20 06:42:00.551228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:29.504 null0 00:31:29.504 [2024-11-20 06:42:00.710173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.504 [2024-11-20 06:42:00.734368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=695153 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 695153 /var/tmp/bperf.sock 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 695153 ']' 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:29.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:29.504 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:29.504 [2024-11-20 06:42:00.788392] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:31:29.505 [2024-11-20 06:42:00.788433] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695153 ] 00:31:29.505 [2024-11-20 06:42:00.862954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.505 [2024-11-20 06:42:00.904631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.505 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:29.505 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:31:29.505 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:29.505 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:29.505 06:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:29.505 06:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:29.505 06:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:29.763 nvme0n1 00:31:29.763 06:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:29.763 06:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:29.763 Running I/O for 2 seconds... 
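
[annotation] Condensed, the randread 4096/128 pass above is four commands against the bperf RPC socket, all visible in the trace. bdevperf is held at --wait-for-rpc so accel options could be injected before framework_start_init (none are here, since scan_dsa=false):

    S=/var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r "$S" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s "$S" framework_start_init
    # --ddgst enables the NVMe/TCP data digest (crc32c) for this controller
    ./scripts/rpc.py -s "$S" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s "$S" perform_tests
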
00:31:32.071 25232.00 IOPS, 98.56 MiB/s [2024-11-20T05:42:03.907Z] 25073.50 IOPS, 97.94 MiB/s 00:31:32.071 Latency(us) 00:31:32.071 [2024-11-20T05:42:03.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.071 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:32.071 nvme0n1 : 2.01 25082.82 97.98 0.00 0.00 5097.33 2481.01 19099.06 00:31:32.071 [2024-11-20T05:42:03.907Z] =================================================================================================================== 00:31:32.071 [2024-11-20T05:42:03.907Z] Total : 25082.82 97.98 0.00 0.00 5097.33 2481.01 19099.06 00:31:32.071 { 00:31:32.071 "results": [ 00:31:32.071 { 00:31:32.071 "job": "nvme0n1", 00:31:32.071 "core_mask": "0x2", 00:31:32.071 "workload": "randread", 00:31:32.071 "status": "finished", 00:31:32.071 "queue_depth": 128, 00:31:32.071 "io_size": 4096, 00:31:32.071 "runtime": 2.008546, 00:31:32.071 "iops": 25082.821105416555, 00:31:32.071 "mibps": 97.97976994303342, 00:31:32.071 "io_failed": 0, 00:31:32.071 "io_timeout": 0, 00:31:32.071 "avg_latency_us": 5097.3264692338225, 00:31:32.071 "min_latency_us": 2481.0057142857145, 00:31:32.071 "max_latency_us": 19099.062857142857 00:31:32.071 } 00:31:32.071 ], 00:31:32.071 "core_count": 1 00:31:32.071 } 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:32.071 | select(.opcode=="crc32c") 00:31:32.071 | "\(.module_name) \(.executed)"' 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 695153 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 695153 ']' 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 695153 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 695153 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:32.071 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:31:32.072 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 695153' 00:31:32.072 killing process with pid 695153 00:31:32.072 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 695153 00:31:32.072 Received shutdown signal, test time was about 2.000000 seconds 00:31:32.072 00:31:32.072 Latency(us) 00:31:32.072 [2024-11-20T05:42:03.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.072 [2024-11-20T05:42:03.908Z] =================================================================================================================== 00:31:32.072 [2024-11-20T05:42:03.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:32.072 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 695153 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=695700 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 695700 /var/tmp/bperf.sock 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 695700 ']' 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:32.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:32.331 06:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:32.331 [2024-11-20 06:42:04.005087] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:31:32.331 [2024-11-20 06:42:04.005139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695700 ] 00:31:32.331 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:32.331 Zero copy mechanism will not be used. 00:31:32.331 [2024-11-20 06:42:04.079411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.331 [2024-11-20 06:42:04.116752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.331 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:32.331 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:31:32.331 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:32.331 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:32.331 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:32.898 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:32.898 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:33.157 nvme0n1 00:31:33.157 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:33.157 06:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:33.157 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:33.157 Zero copy mechanism will not be used. 00:31:33.157 Running I/O for 2 seconds... 
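
[annotation] Each pass ends with the same verdict: read the accel framework stats over the bperf socket, keep only the crc32c opcode, and require that a nonzero count was executed by the expected module ("software" here, since DSA offload is disabled). A sketch of that check, using the jq filter shown in the trace:

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | { read -r acc_module acc_executed
            # pass only if crc32c ran at least once in the software module
            [ "${acc_executed:-0}" -gt 0 ] && [ "$acc_module" = software ]; }
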
00:31:35.468 6062.00 IOPS, 757.75 MiB/s [2024-11-20T05:42:07.304Z] 6012.00 IOPS, 751.50 MiB/s 00:31:35.468 Latency(us) 00:31:35.468 [2024-11-20T05:42:07.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.468 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:35.468 nvme0n1 : 2.00 6010.66 751.33 0.00 0.00 2659.22 628.05 6803.26 00:31:35.468 [2024-11-20T05:42:07.304Z] =================================================================================================================== 00:31:35.468 [2024-11-20T05:42:07.304Z] Total : 6010.66 751.33 0.00 0.00 2659.22 628.05 6803.26 00:31:35.468 { 00:31:35.468 "results": [ 00:31:35.468 { 00:31:35.468 "job": "nvme0n1", 00:31:35.468 "core_mask": "0x2", 00:31:35.468 "workload": "randread", 00:31:35.468 "status": "finished", 00:31:35.468 "queue_depth": 16, 00:31:35.468 "io_size": 131072, 00:31:35.468 "runtime": 2.003108, 00:31:35.468 "iops": 6010.6594352376405, 00:31:35.468 "mibps": 751.3324294047051, 00:31:35.468 "io_failed": 0, 00:31:35.468 "io_timeout": 0, 00:31:35.468 "avg_latency_us": 2659.2191362126246, 00:31:35.468 "min_latency_us": 628.0533333333333, 00:31:35.468 "max_latency_us": 6803.260952380952 00:31:35.468 } 00:31:35.468 ], 00:31:35.468 "core_count": 1 00:31:35.468 } 00:31:35.468 06:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:35.468 06:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:35.468 06:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:35.468 06:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:35.468 | select(.opcode=="crc32c") 00:31:35.468 | "\(.module_name) \(.executed)"' 00:31:35.468 06:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 695700 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 695700 ']' 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 695700 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 695700 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
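The actual digest check is the jq pipeline traced above: it pulls the crc32c entry out of accel_get_stats and reports which module executed it and how many times. A sketch of the same query (field names are exactly those the filter selects; the rest of the accel_get_stats JSON is not shown in this log):

  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints e.g. "software <count>"; with scan_dsa=false the expected module is
  # software, and the test asserts count > 0 and module_name == software

The `read -r acc_module acc_executed` in the trace consumes exactly that one-line output.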
reactor_1 = sudo ']' 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 695700' 00:31:35.468 killing process with pid 695700 00:31:35.468 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 695700 00:31:35.468 Received shutdown signal, test time was about 2.000000 seconds 00:31:35.468 00:31:35.468 Latency(us) 00:31:35.468 [2024-11-20T05:42:07.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.468 [2024-11-20T05:42:07.304Z] =================================================================================================================== 00:31:35.468 [2024-11-20T05:42:07.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:35.469 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 695700 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=696681 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 696681 /var/tmp/bperf.sock 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 696681 ']' 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:35.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:35.728 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:35.728 [2024-11-20 06:42:07.438794] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:31:35.728 [2024-11-20 06:42:07.438839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696681 ] 00:31:35.728 [2024-11-20 06:42:07.513580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.728 [2024-11-20 06:42:07.559677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.986 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:35.986 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:31:35.986 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:35.986 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:35.986 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:36.244 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:36.244 06:42:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:36.502 nvme0n1 00:31:36.502 06:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:36.502 06:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:36.759 Running I/O for 2 seconds... 
00:31:38.630 28190.00 IOPS, 110.12 MiB/s [2024-11-20T05:42:10.466Z] 28341.00 IOPS, 110.71 MiB/s 00:31:38.630 Latency(us) 00:31:38.630 [2024-11-20T05:42:10.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.630 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:38.630 nvme0n1 : 2.01 28360.39 110.78 0.00 0.00 4509.19 1771.03 13606.52 00:31:38.630 [2024-11-20T05:42:10.466Z] =================================================================================================================== 00:31:38.630 [2024-11-20T05:42:10.466Z] Total : 28360.39 110.78 0.00 0.00 4509.19 1771.03 13606.52 00:31:38.630 { 00:31:38.630 "results": [ 00:31:38.630 { 00:31:38.630 "job": "nvme0n1", 00:31:38.630 "core_mask": "0x2", 00:31:38.630 "workload": "randwrite", 00:31:38.630 "status": "finished", 00:31:38.630 "queue_depth": 128, 00:31:38.630 "io_size": 4096, 00:31:38.630 "runtime": 2.007624, 00:31:38.630 "iops": 28360.390192585863, 00:31:38.630 "mibps": 110.78277418978853, 00:31:38.630 "io_failed": 0, 00:31:38.630 "io_timeout": 0, 00:31:38.630 "avg_latency_us": 4509.1879827746125, 00:31:38.630 "min_latency_us": 1771.032380952381, 00:31:38.630 "max_latency_us": 13606.521904761905 00:31:38.630 } 00:31:38.630 ], 00:31:38.630 "core_count": 1 00:31:38.630 } 00:31:38.630 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:38.630 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:38.630 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:38.630 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:38.630 | select(.opcode=="crc32c") 00:31:38.630 | "\(.module_name) \(.executed)"' 00:31:38.630 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 696681 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 696681 ']' 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 696681 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 696681 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 696681' 00:31:38.889 killing process with pid 696681 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 696681 00:31:38.889 Received shutdown signal, test time was about 2.000000 seconds 00:31:38.889 00:31:38.889 Latency(us) 00:31:38.889 [2024-11-20T05:42:10.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.889 [2024-11-20T05:42:10.725Z] =================================================================================================================== 00:31:38.889 [2024-11-20T05:42:10.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:38.889 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 696681 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=697245 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 697245 /var/tmp/bperf.sock 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 697245 ']' 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:39.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:39.148 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:39.148 [2024-11-20 06:42:10.841804] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:31:39.148 [2024-11-20 06:42:10.841850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697245 ] 00:31:39.148 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:39.148 Zero copy mechanism will not be used. 00:31:39.148 [2024-11-20 06:42:10.915888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.148 [2024-11-20 06:42:10.957676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.407 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:39.407 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:31:39.407 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:39.407 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:39.407 06:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:39.407 06:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:39.407 06:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:39.665 nvme0n1 00:31:39.665 06:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:39.665 06:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:39.923 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:39.923 Zero copy mechanism will not be used. 00:31:39.923 Running I/O for 2 seconds... 
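Throughout these runs the JSON block mirrors the human-readable table, and mibps is derived rather than measured: mibps = iops x io_size / 2^20. A quick spot-check of the two finished runs, numbers copied from the JSON above (any POSIX awk):

  awk 'BEGIN {
      printf "%.2f MiB/s\n", 6010.659435  * 131072 / 1048576   # randread 128K qd16  -> 751.33
      printf "%.2f MiB/s\n", 28360.390193 * 4096   / 1048576   # randwrite 4K qd128  -> 110.78
  }'

Both match the reported 751.33 and 110.78 MiB/s.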
00:31:41.794 6565.00 IOPS, 820.62 MiB/s [2024-11-20T05:42:13.630Z] 6573.50 IOPS, 821.69 MiB/s 00:31:41.794 Latency(us) 00:31:41.794 [2024-11-20T05:42:13.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.794 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:41.794 nvme0n1 : 2.00 6571.75 821.47 0.00 0.00 2430.87 1942.67 4587.52 00:31:41.794 [2024-11-20T05:42:13.630Z] =================================================================================================================== 00:31:41.794 [2024-11-20T05:42:13.630Z] Total : 6571.75 821.47 0.00 0.00 2430.87 1942.67 4587.52 00:31:41.794 { 00:31:41.794 "results": [ 00:31:41.794 { 00:31:41.795 "job": "nvme0n1", 00:31:41.795 "core_mask": "0x2", 00:31:41.795 "workload": "randwrite", 00:31:41.795 "status": "finished", 00:31:41.795 "queue_depth": 16, 00:31:41.795 "io_size": 131072, 00:31:41.795 "runtime": 2.002966, 00:31:41.795 "iops": 6571.754088686478, 00:31:41.795 "mibps": 821.4692610858098, 00:31:41.795 "io_failed": 0, 00:31:41.795 "io_timeout": 0, 00:31:41.795 "avg_latency_us": 2430.872415681763, 00:31:41.795 "min_latency_us": 1942.6742857142858, 00:31:41.795 "max_latency_us": 4587.52 00:31:41.795 } 00:31:41.795 ], 00:31:41.795 "core_count": 1 00:31:41.795 } 00:31:41.795 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:41.795 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:41.795 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:41.795 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:41.795 | select(.opcode=="crc32c") 00:31:41.795 | "\(.module_name) \(.executed)"' 00:31:41.795 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 697245 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 697245 ']' 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 697245 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 697245 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 697245' 00:31:42.053 killing process with pid 697245 00:31:42.053 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 697245 00:31:42.053 Received shutdown signal, test time was about 2.000000 seconds 00:31:42.053 00:31:42.054 Latency(us) 00:31:42.054 [2024-11-20T05:42:13.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.054 [2024-11-20T05:42:13.890Z] =================================================================================================================== 00:31:42.054 [2024-11-20T05:42:13.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:42.054 06:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 697245 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 695108 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 695108 ']' 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 695108 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 695108 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 695108' 00:31:42.312 killing process with pid 695108 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 695108 00:31:42.312 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 695108 00:31:42.570 00:31:42.570 real 0m13.853s 00:31:42.570 user 0m26.587s 00:31:42.570 sys 0m4.457s 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:42.570 ************************************ 00:31:42.570 END TEST nvmf_digest_clean 00:31:42.570 ************************************ 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.570 ************************************ 00:31:42.570 START TEST nvmf_digest_error 00:31:42.570 ************************************ 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 
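killprocess, used above for each bperf instance and finally for the nvmf target (pid 695108), follows the same pattern every time. A condensed sketch of the logic visible in the xtrace; the sudo branch and the non-Linux ps handling are assumptions, since neither is taken in this log:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1            # is the process still alive?
      local process_name
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1
      fi                                                 # non-Linux branch elided
      if [ "$process_name" = sudo ]; then
          sudo kill "$pid"                               # assumption: kill wrapped process via sudo
      else
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"                                        # reap, as in the trace's 'wait <pid>'
  }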
00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=697760 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 697760 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 697760 ']' 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:42.570 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:42.570 [2024-11-20 06:42:14.361360] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:31:42.570 [2024-11-20 06:42:14.361404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.829 [2024-11-20 06:42:14.439719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.829 [2024-11-20 06:42:14.480642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.829 [2024-11-20 06:42:14.480676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.829 [2024-11-20 06:42:14.480683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.829 [2024-11-20 06:42:14.480689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.829 [2024-11-20 06:42:14.480695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:42.829 [2024-11-20 06:42:14.481257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:42.829 [2024-11-20 06:42:14.545687] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.829 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:42.829 null0 00:31:42.829 [2024-11-20 06:42:14.639850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.088 [2024-11-20 06:42:14.664043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.088 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.088 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:43.088 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:43.088 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:43.088 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:43.088 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:43.088 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=697958 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 697958 /var/tmp/bperf.sock 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 697958 ']' 
00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:43.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:43.089 [2024-11-20 06:42:14.714503] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:31:43.089 [2024-11-20 06:42:14.714543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697958 ] 00:31:43.089 [2024-11-20 06:42:14.787446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.089 [2024-11-20 06:42:14.830168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:43.089 06:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:43.348 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:43.348 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.348 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:43.348 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.348 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:43.348 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:43.915 nvme0n1 00:31:43.915 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:43.915 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.915 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:43.915 
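The error path being exercised here: on the target, crc32c was routed to the error accel module before init (accel_assign_opc above), kept passive while the host attached, and is about to be armed to corrupt the next 256 crc32c results. Every data digest the target then produces for a read is wrong, so the host's nvme_tcp layer reports "data digest error" and completes each command with COMMAND TRANSIENT TRANSPORT ERROR (00/22); because the host set --bdev-retry-count -1, bdevperf retries these indefinitely instead of failing the run. A sketch of the target-side RPC sequence, assuming the target's default RPC socket (framework_start_init happens in between but is not traced verbatim here):

  # target was started with --wait-for-rpc so opcodes can be re-assigned pre-init
  "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error
  # keep the module passive while the host attaches with --ddgst...
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  # ...then corrupt the result of the next 256 crc32c operations and run I/O
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256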
06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.915 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:43.915 06:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:43.915 Running I/O for 2 seconds... 00:31:43.915 [2024-11-20 06:42:15.595270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.915 [2024-11-20 06:42:15.595303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.915 [2024-11-20 06:42:15.595314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.915 [2024-11-20 06:42:15.607178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.915 [2024-11-20 06:42:15.607210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.915 [2024-11-20 06:42:15.607220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.915 [2024-11-20 06:42:15.618188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.618223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.618232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.626834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.626858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.626867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.639247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.639273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.639281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.651284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.651306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.651315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.663464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.663487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.663495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.674816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.674837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.674846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.683625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.683646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.683655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.695505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.695528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.695536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.707517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.707539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.707547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.718803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.718824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.718832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.731122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.731144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.731153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.916 [2024-11-20 06:42:15.739633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:43.916 [2024-11-20 06:42:15.739655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.916 [2024-11-20 06:42:15.739663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.751964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.751988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.751997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.765095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.765118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.765127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.775075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.775097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.775105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.783337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.783359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.783367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.793476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.793511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.793521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.804039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.804060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.804069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.812689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.812711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.812720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.823527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.823549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.823561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.834793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.834814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.834823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.843086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.843108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.843116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.852788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.852809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.852818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.862618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.862639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.862648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.874443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.874464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:44.175 [2024-11-20 06:42:15.874473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.882536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.882558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.882567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.892426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.892459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.892468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.902094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.902116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.902124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.912049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.912074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.912082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.919810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.919832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.175 [2024-11-20 06:42:15.919841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.175 [2024-11-20 06:42:15.929712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.175 [2024-11-20 06:42:15.929734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:15.929743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.176 [2024-11-20 06:42:15.940986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.176 [2024-11-20 06:42:15.941008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19653 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:15.941017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.176 [2024-11-20 06:42:15.950056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.176 [2024-11-20 06:42:15.950078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:15.950087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.176 [2024-11-20 06:42:15.960044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.176 [2024-11-20 06:42:15.960067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:15.960076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.176 [2024-11-20 06:42:15.969190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.176 [2024-11-20 06:42:15.969217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:15.969227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.176 [2024-11-20 06:42:15.978135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.176 [2024-11-20 06:42:15.978157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:15.978165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.176 [2024-11-20 06:42:15.987529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.176 [2024-11-20 06:42:15.987551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:15.987559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.176 [2024-11-20 06:42:15.997925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.176 [2024-11-20 06:42:15.997946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:15.997955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.176 [2024-11-20 06:42:16.006386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.176 [2024-11-20 06:42:16.006414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:111 nsid:1 lba:7794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.176 [2024-11-20 06:42:16.006427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.017570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.017595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.017604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.030362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.030385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.030394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.038902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.038924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.038933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.050373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.050395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.050404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.062589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.062611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.062620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.073376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.073398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.073406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.083569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.083596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.083605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.091984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.092006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.092015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.102982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.103003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.103012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.111572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.111593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.111602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.121023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.121044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.121052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.131441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.131462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.131470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.140540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.140562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.140571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.149479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.149500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.149509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.157780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.157802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.434 [2024-11-20 06:42:16.157811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.434 [2024-11-20 06:42:16.166782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.434 [2024-11-20 06:42:16.166804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.166813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.176377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.176399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.176407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.185392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.185412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.185420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.195402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.195423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.195432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.204013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.204033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.204042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.212511] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.212532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.212541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.222623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.222644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.222652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.231520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.231541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.231549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.240120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.240140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.240152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.249321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.249342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.249351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.435 [2024-11-20 06:42:16.260172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.435 [2024-11-20 06:42:16.260192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.435 [2024-11-20 06:42:16.260200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.270745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.270771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.270780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:31:44.693 [2024-11-20 06:42:16.278499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.278521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.278529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.290446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.290468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.290477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.300813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.300835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.300843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.310826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.310848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.310857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.318969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.318991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.319000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.330134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.330159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.330167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.340728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.340754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.340763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.350041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.350062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.350071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.359029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.359051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.359060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.369894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.369915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.369924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.381842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.381863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.381871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.389943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.389964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.389973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.399806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.399827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.399835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.410832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.410853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.410861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.421988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.422009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.422017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.433390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.693 [2024-11-20 06:42:16.433411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.693 [2024-11-20 06:42:16.433420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.693 [2024-11-20 06:42:16.445006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.445026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.445035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.694 [2024-11-20 06:42:16.453621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.453642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.453651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.694 [2024-11-20 06:42:16.464988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.465009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.465017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.694 [2024-11-20 06:42:16.477096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.477116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.477125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.694 [2024-11-20 06:42:16.485331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.485352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.485360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.694 [2024-11-20 06:42:16.494728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.494748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.494757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.694 [2024-11-20 06:42:16.504804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.504824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.504836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.694 [2024-11-20 06:42:16.513209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.513230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.513238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.694 [2024-11-20 06:42:16.524451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.694 [2024-11-20 06:42:16.524479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.694 [2024-11-20 06:42:16.524489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.534995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.535018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.535027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.543347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.543369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.543377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.555702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.555723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 
[2024-11-20 06:42:16.555731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.564279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.564300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.564309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.572961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.572981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.572989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.581542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.581563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.581571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 25277.00 IOPS, 98.74 MiB/s [2024-11-20T05:42:16.789Z] [2024-11-20 06:42:16.592893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.592914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.592923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.603057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.603078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.603087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.612735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.612757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.612765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.621637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.621658] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.621666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.631111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.631132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.631140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.639779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.639799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.639808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.953 [2024-11-20 06:42:16.649811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.953 [2024-11-20 06:42:16.649832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.953 [2024-11-20 06:42:16.649840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.659321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.659342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.659351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.669219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.669256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.669268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.677926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.677947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.677955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.687804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.687825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.687833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.696422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.696442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.696451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.707069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.707089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.707098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.715234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.715257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.715265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.727269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.727291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.727299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.735519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.735540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.735548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.747443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.747464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.747472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.758866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.758891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.758900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.771578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.771599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.771607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.954 [2024-11-20 06:42:16.781993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:44.954 [2024-11-20 06:42:16.782017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.954 [2024-11-20 06:42:16.782027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.790344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.790370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.790379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.801217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.801240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.801248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.810635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.810657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.810665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.819409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.819430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.819439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.830656] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.830677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.830685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.842455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.842476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.842484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.850532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.850552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.850560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.863280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.863301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.863309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.875464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.875484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.875492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.883810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.883831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.883840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.895678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.895699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.895707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:31:45.212 [2024-11-20 06:42:16.904861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.904881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.904889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.914377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.914398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.914406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.925375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.925396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.925405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.933875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.933896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.933909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.945000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.945021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.945029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.953185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.953211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.953220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.964359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.964380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.964388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.974553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.974575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.974584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.982856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.982878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.982886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:16.993813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:16.993834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:16.993842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:17.003474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:17.003495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:17.003503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:17.015765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:17.015786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:17.015794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.212 [2024-11-20 06:42:17.027010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.212 [2024-11-20 06:42:17.027035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.212 [2024-11-20 06:42:17.027043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.213 [2024-11-20 06:42:17.035790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.213 [2024-11-20 06:42:17.035810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.213 [2024-11-20 06:42:17.035818] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.048576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.048600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.048609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.056527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.056548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.056557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.068047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.068070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.068078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.079055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.079077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.079085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.089077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.089097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.089106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.099495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.099516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.099525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.108095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.108115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.108124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.116621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.116642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.116651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.127853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.127873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.127881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.137748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.137770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.137778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.145722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.145743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.145751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.155267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.155287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.155296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.164126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.164147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.471 [2024-11-20 06:42:17.164155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.471 [2024-11-20 06:42:17.173191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860) 00:31:45.471 [2024-11-20 06:42:17.173221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:45.472 [2024-11-20 06:42:17.173229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:45.472 [2024-11-20 06:42:17.182999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860)
00:31:45.472 [2024-11-20 06:42:17.183021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:45.472 [2024-11-20 06:42:17.183029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair 0x636860 -> READ command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for several dozen more reads between 06:42:17.192046 and 06:42:17.579454, with cid and lba varying per I/O ...]
00:31:45.989 25229.50 IOPS, 98.55 MiB/s [2024-11-20T05:42:17.825Z]
00:31:45.989 [2024-11-20 06:42:17.589629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x636860)
00:31:45.989 [2024-11-20 06:42:17.589649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:45.989 [2024-11-20 06:42:17.589658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:45.989
00:31:45.989 Latency(us)
[2024-11-20T05:42:17.825Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min      max
00:31:45.989 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:45.989 nvme0n1 : 2.04                                 24732.08  96.61  0.00    0.00  5067.65  2434.19  47435.58
[2024-11-20T05:42:17.825Z] ===================================================================================================================
[2024-11-20T05:42:17.825Z] Total :                           24732.08  96.61  0.00    0.00  5067.65  2434.19  47435.58
00:31:45.989 {
00:31:45.989   "results": [
00:31:45.989     {
00:31:45.989       "job": "nvme0n1",
00:31:45.989       "core_mask": "0x2",
00:31:45.989       "workload": "randread",
00:31:45.989       "status": "finished",
00:31:45.989       "queue_depth": 128,
00:31:45.989       "io_size": 4096,
00:31:45.989       "runtime": 2.044511,
00:31:45.989       "iops": 24732.07529820089,
00:31:45.989       "mibps": 96.60966913359722,
00:31:45.989       "io_failed": 0,
00:31:45.989       "io_timeout": 0,
00:31:45.989       "avg_latency_us": 5067.647814967063,
00:31:45.989       "min_latency_us": 2434.194285714286,
00:31:45.989       "max_latency_us": 47435.58095238095
00:31:45.989     }
00:31:45.989   ],
00:31:45.989   "core_count": 1
00:31:45.989 }
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
	| .driver_specific
	| .nvme_error
	| .status_code
	| .command_transient_transport_error'
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 ))
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 697958
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 697958 ']'
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 697958
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 697958
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 697958'
killing process with pid 697958
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 697958
Received shutdown signal, test time was about 2.000000 seconds
00:31:46.248
00:31:46.248 Latency(us)
[2024-11-20T05:42:18.084Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
[2024-11-20T05:42:18.084Z] ===================================================================================================================
[2024-11-20T05:42:18.084Z] Total :                           0.00  0.00   0.00    0.00  0.00     0.00  0.00
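The `get_transient_errcount` helper traced above boils down to one RPC call and one jq filter over the iostat JSON. A minimal standalone sketch, assuming the bdevperf RPC socket and bdev name from this run, and that `bdev_nvme_set_options --nvme-error-stat` was applied earlier so the per-status-code counters exist:

```bash
#!/usr/bin/env bash
# Print the number of I/Os that completed with COMMAND TRANSIENT TRANSPORT
# ERROR, as accumulated by the bdev layer when --nvme-error-stat is enabled.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error'
```

In this run the filter returned 198, which is what the `(( 198 > 0 ))` check asserts on. Note that `io_failed` stayed 0 in the results JSON, consistent with `--bdev-retry-count -1` retrying each digest-error completion until the read eventually succeeds.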
06:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 697958
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=698464
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 698464 /var/tmp/bperf.sock
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 698464 ']'
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:46.506 [2024-11-20 06:42:18.116649] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:31:46.506 [2024-11-20 06:42:18.116699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698464 ]
00:31:46.506 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:46.506 Zero copy mechanism will not be used.
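`run_bperf_err` launches bdevperf idle (`-z`) on a private RPC socket and waits for the socket to answer before configuring anything. A rough standalone equivalent of that launch-and-wait step; the polling loop is an assumption standing in for the autotest `waitforlisten` helper, with `rpc_get_methods` used only as a cheap RPC to probe the socket:

```bash
#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# -z keeps bdevperf idle until perform_tests is sent over the RPC socket;
# -r points its RPC listener at a private socket so tests don't collide.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll until the RPC socket is up (stand-in for waitforlisten's retry loop).
for _ in $(seq 1 100); do
  "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done
```

Because the job is parked behind `-z`, the test can inject errors and attach the controller first, then trigger the actual I/O run explicitly, as the next trace section does.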
00:31:46.506 [2024-11-20 06:42:18.192650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:46.506 [2024-11-20 06:42:18.229485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:47.440 06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
06:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:48.007 nvme0n1
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
06:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:48.008 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:48.008 Zero copy mechanism will not be used.
00:31:48.008 Running I/O for 2 seconds...
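The RPCs traced above are the whole error-injection setup: keep per-status-code NVMe error counters with unlimited retries, attach the target with the TCP data digest enabled, then tell the accel_error module to corrupt crc32c results so received data digests stop matching. Condensed into a sketch using the same socket, addresses, and flags as the trace; the comments are interpretation, not autotest code:

```bash
#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

# Record NVMe errors per status code; -1 retries transient errors forever.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# --ddgst enables the NVMe/TCP data digest (crc32c over each data PDU).
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the accel framework's crc32c results (-i 32 as in the trace), so
# receive-path digest checks fail and READs complete with transient errors.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Release the bdevperf job that was parked behind -z.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```

The trailing log of this run, below, is exactly the expected outcome: every corrupted digest shows up as a `data digest error` followed by a `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion that the bdev layer retries.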
00:31:48.008 [2024-11-20 06:42:19.705622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600)
00:31:48.008 [2024-11-20 06:42:19.705656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:48.008 [2024-11-20 06:42:19.705666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair 0x8bc600 -> READ, now len:32 matching the 131072-byte I/O size -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens more reads between 06:42:19.710914 and 06:42:20.122360, with cid and lba varying per I/O; the capture then cuts off mid-entry ...]
00:31:48.530 [2024-11-20 06:42:20.122360]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.122383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.122391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.127541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.127563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.127571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.132699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.132722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.132730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.138079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.138101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.138109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.144143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.144167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.144176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.149882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.149910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.149919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.155229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.155252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.155260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:31:48.530 [2024-11-20 06:42:20.160526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.160548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.160556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.165710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.165732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.165740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.171659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.171682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.171691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.177249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.177271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.177280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.182773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.182795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.182803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.188371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.188393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.188401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.194146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.194169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.194177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.199764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.199786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.199794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.205361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.205384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.205392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.210634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.210657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.210665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.215886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.215908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.215917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.221172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.221194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.221219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.530 [2024-11-20 06:42:20.226397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.530 [2024-11-20 06:42:20.226419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.530 [2024-11-20 06:42:20.226428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.231729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.231752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.231760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.237125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.237148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.237156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.242730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.242753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.242765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.248212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.248234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.248243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.253664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.253686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.253695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.259112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.259134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.259143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.264697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.264720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.264729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.270015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.270038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.270046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.275307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.275331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.275340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.280695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.280718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.280726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.286087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.286110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.286118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.291455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.291480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.291488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.296723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.296745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.296754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.302064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.302087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.302095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.307304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.307327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 
[2024-11-20 06:42:20.307335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.312665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.312689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.312698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.317935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.317959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.317967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.323234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.323256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.323265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.328541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.328564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.328573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.333945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.333968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.333976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.339264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.339287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.339295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.344407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.344429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.344437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.347245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.347267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.347275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.352312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.352334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.352342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.531 [2024-11-20 06:42:20.358230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.531 [2024-11-20 06:42:20.358253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.531 [2024-11-20 06:42:20.358262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.364195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.364226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.364235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.370050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.370074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.370083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.375350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.375373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.375382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.380690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.380712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.380726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.385959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.385981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.385990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.391371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.391394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.391403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.396759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.396781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.396790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.402144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.402168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.402176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.407462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.407485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.407493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.412859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.412882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.412890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.418548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.418570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.418578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.424140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.424162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.424170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.430115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.430140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.430148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.435788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.435811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.435820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.441274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.441297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.441305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.446855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.791 [2024-11-20 06:42:20.446878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.791 [2024-11-20 06:42:20.446886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.791 [2024-11-20 06:42:20.452126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.452148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.452156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.457579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 
[2024-11-20 06:42:20.457600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.457608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.462398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.462421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.462429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.467876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.467898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.467907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.473305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.473328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.473341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.478964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.478987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.478995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.484687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.484709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.484717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.490246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.490268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.490276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.495805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.495828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.495836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.502384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.502408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.502416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.509791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.509814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.509822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.517675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.517698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.517706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.524927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.524950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.524959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.531705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.531732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.531741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.538227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.538250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.538258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.544740] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.544762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.544770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.551730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.551752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.551761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.559977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.559999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.560008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.566638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.566660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.566669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.573064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.573086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.573095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.579820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.579844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.579852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.792 [2024-11-20 06:42:20.586693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.792 [2024-11-20 06:42:20.586718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.792 [2024-11-20 06:42:20.586727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:48.793 [2024-11-20 06:42:20.594793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.793 [2024-11-20 06:42:20.594817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.793 [2024-11-20 06:42:20.594825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:48.793 [2024-11-20 06:42:20.599488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.793 [2024-11-20 06:42:20.599510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.793 [2024-11-20 06:42:20.599519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:48.793 [2024-11-20 06:42:20.605409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.793 [2024-11-20 06:42:20.605432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.793 [2024-11-20 06:42:20.605442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:48.793 [2024-11-20 06:42:20.612263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.793 [2024-11-20 06:42:20.612286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.793 [2024-11-20 06:42:20.612294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:48.793 [2024-11-20 06:42:20.620449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:48.793 [2024-11-20 06:42:20.620473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.793 [2024-11-20 06:42:20.620483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.628316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.628340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.628349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.635603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.635626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.635635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.641825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.641848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.641856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.647181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.647210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.647223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.652507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.652529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.652538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.657255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.657277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.657285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.661012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.661034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.661043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.666127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.666149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.666157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.671290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.671311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.671319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.676173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.676195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.676210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.681200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.681226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.681234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.686277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.686299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.686307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.691430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.691458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.691466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.696647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.696669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.696677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.702027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.702048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.702056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.053 5541.00 IOPS, 692.62 MiB/s [2024-11-20T05:42:20.889Z] [2024-11-20 06:42:20.708322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.708344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 
[2024-11-20 06:42:20.708352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.713733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.713754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.713762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.719220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.053 [2024-11-20 06:42:20.719242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.053 [2024-11-20 06:42:20.719250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.053 [2024-11-20 06:42:20.724708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.724730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.724738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.730116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.730138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.730146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.735416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.735437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.735445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.740875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.740897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.740905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.746531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.746553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.746561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.751934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.751956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.751963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.757313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.757336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.757344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.762694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.762715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.762724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.768199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.768226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.768234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.773563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.773585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.773593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.778926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.778947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.778956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.784400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.784422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.784434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.789861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.789883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.789892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.795368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.795391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.795399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.800703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.800725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.800733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.806065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.806087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.806094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.811569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.811590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.811598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.816702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.816724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.816732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.821889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.821911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.821919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.827450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.827472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.827480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.833278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.833305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.833313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.838739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.838761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.838770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.844125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.844148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.844157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.849438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.849459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.849467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.854586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.854607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.854614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.859763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.859785] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.859792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.864966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.054 [2024-11-20 06:42:20.864989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.054 [2024-11-20 06:42:20.864997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.054 [2024-11-20 06:42:20.870179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.055 [2024-11-20 06:42:20.870208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.055 [2024-11-20 06:42:20.870217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.055 [2024-11-20 06:42:20.875338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.055 [2024-11-20 06:42:20.875361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.055 [2024-11-20 06:42:20.875368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.055 [2024-11-20 06:42:20.880183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.055 [2024-11-20 06:42:20.880214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.055 [2024-11-20 06:42:20.880223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.885387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.314 [2024-11-20 06:42:20.885410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.885419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.890576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.314 [2024-11-20 06:42:20.890602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.890610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.895818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 
00:31:49.314 [2024-11-20 06:42:20.895840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.895848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.901066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.314 [2024-11-20 06:42:20.901087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.901095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.906261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.314 [2024-11-20 06:42:20.906282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.906290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.911332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.314 [2024-11-20 06:42:20.911354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.911361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.916399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.314 [2024-11-20 06:42:20.916420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.916429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.921423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.314 [2024-11-20 06:42:20.921445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.921457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.926383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.314 [2024-11-20 06:42:20.926404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.314 [2024-11-20 06:42:20.926413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.314 [2024-11-20 06:42:20.931444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.931465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.931474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.936552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.936573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.936581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.941644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.941666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.941673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.946878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.946900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.946908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.951987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.952008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.952015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.957138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.957160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.957167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.962341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.962362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.962371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.967594] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.967616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.967624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.972826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.972848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.972856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.978111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.978133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.978141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.983402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.983425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.983433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.988612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.988633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.988642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.993835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.993856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.993864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:20.999023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:20.999044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:20.999052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:31:49.315 [2024-11-20 06:42:21.004291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.004312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.004320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.009463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.009484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.009495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.014653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.014674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.014682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.019826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.019847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.019855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.024990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.025012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.025020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.030183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.030211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.030219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.035417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.035437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.035445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.040602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.040623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.040631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.045823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.045844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.045852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.051058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.051079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.051086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.056298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.056323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.056331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.061554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.061575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.061583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.066805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.315 [2024-11-20 06:42:21.066827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.315 [2024-11-20 06:42:21.066835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.315 [2024-11-20 06:42:21.072027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.072050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.072058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.077275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.077298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.077306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.082433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.082455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.082462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.087648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.087670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.087679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.092874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.092896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.092905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.098077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.098098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.098106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.103274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.103295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.103303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.108460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.108482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.108490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.113615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.113636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.113643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.118824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.118845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.118853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.124061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.124082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.124090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.129305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.129326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.129334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.134499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.134521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.134529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.139728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.139750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 [2024-11-20 06:42:21.139757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.316 [2024-11-20 06:42:21.145099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.316 [2024-11-20 06:42:21.145122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.316 
[2024-11-20 06:42:21.145135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.150331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.150354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.150363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.155550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.155573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.155581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.160772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.160794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.160803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.165956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.165978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.165987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.171128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.171150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.171158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.176336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.176358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.176366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.181557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.181578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.181586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.186840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.186860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.186869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.192105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.192130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.192138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.197338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.197360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.197368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.202546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.202567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.202575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.207807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.207828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.207836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.212775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.212796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.212804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.217971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.217992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.218001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.223268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.223291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.223299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.228576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.228598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.228606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.233876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.233898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.233907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.239095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.239117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.239125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.244320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.244343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.244351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.249448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.249470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.249478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.254730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.254751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.575 [2024-11-20 06:42:21.254759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.575 [2024-11-20 06:42:21.259943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.575 [2024-11-20 06:42:21.259965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.576 [2024-11-20 06:42:21.259973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.576 [2024-11-20 06:42:21.265122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.576 [2024-11-20 06:42:21.265144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.576 [2024-11-20 06:42:21.265152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.576 [2024-11-20 06:42:21.270321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.576 [2024-11-20 06:42:21.270344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.576 [2024-11-20 06:42:21.270352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:49.576 [2024-11-20 06:42:21.275535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.576 [2024-11-20 06:42:21.275555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.576 [2024-11-20 06:42:21.275563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:49.576 [2024-11-20 06:42:21.280715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.576 [2024-11-20 06:42:21.280736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.576 [2024-11-20 06:42:21.280748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:49.576 [2024-11-20 06:42:21.285934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.576 [2024-11-20 06:42:21.285955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.576 [2024-11-20 06:42:21.285963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.576 [2024-11-20 06:42:21.291175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600) 00:31:49.576 
[2024-11-20 06:42:21.291196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:49.576 [2024-11-20 06:42:21.291212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:49.576 [... 06:42:21.296385 through 06:42:21.699140: roughly 80 further entries of the same pattern trimmed here - each a "nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600)" followed by the offending READ (qid:1, cid:0-14, assorted LBAs, len:32) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:31:50.096 [2024-11-20 06:42:21.706698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8bc600)
00:31:50.096 [2024-11-20 06:42:21.706721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:50.096 [2024-11-20 06:42:21.706729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:50.096 5723.50 IOPS, 715.44 MiB/s
00:31:50.096 Latency(us)
[2024-11-20T05:42:21.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:50.096 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:50.096 nvme0n1 : 2.00 5720.06 715.01 0.00 0.00 2794.60 440.81 8862.96
[2024-11-20T05:42:21.932Z] ===================================================================================================================
[2024-11-20T05:42:21.932Z] Total : 5720.06 715.01 0.00 0.00 2794.60 440.81 8862.96
00:31:50.096 {
00:31:50.096   "results": [
00:31:50.096     {
00:31:50.096       "job": "nvme0n1",
00:31:50.096       "core_mask": "0x2",
00:31:50.096       "workload": "randread",
00:31:50.096       "status": "finished",
00:31:50.096       "queue_depth": 16,
00:31:50.096       "io_size": 131072,
00:31:50.096       "runtime": 2.003999,
00:31:50.096       "iops": 5720.062734562242,
00:31:50.096       "mibps": 715.0078418202803,
00:31:50.096       "io_failed": 0,
00:31:50.096       "io_timeout": 0,
00:31:50.096       "avg_latency_us": 2794.600274672549,
00:31:50.096       "min_latency_us": 440.807619047619,
00:31:50.096       "max_latency_us": 8862.96380952381
00:31:50.096     }
00:31:50.096   ],
00:31:50.096   "core_count": 1
00:31:50.096 }
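Note: the per-opcode nvme_error counters consumed by the get_transient_errcount helper traced next are only present in bdev_get_iostat output because bdevperf was configured with bdev_nvme_set_options --nvme-error-stat. A minimal standalone sketch of the same query, assuming the RPC socket and bdev name from this run (the count variable name is illustrative only):

  #!/usr/bin/env bash
  # Read the transient-transport-error counter for one bdev from an SPDK
  # app listening on an RPC socket (requires --nvme-error-stat at setup).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  echo "transient transport errors: $count"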
00:31:50.096 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:50.096 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:50.096 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:50.096 | .driver_specific
00:31:50.096 | .nvme_error
00:31:50.096 | .status_code
00:31:50.096 | .command_transient_transport_error'
00:31:50.096 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 ))
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 698464
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 698464 ']'
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 698464
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 698464
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 698464'
00:31:50.417 killing process with pid 698464
00:31:50.417 06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 698464
00:31:50.417 Received shutdown signal, test time was about 2.000000 seconds
00:31:50.417
00:31:50.417 Latency(us)
[2024-11-20T05:42:22.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T05:42:22.253Z] ===================================================================================================================
[2024-11-20T05:42:22.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
06:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 698464
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=699163
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 699163 /var/tmp/bperf.sock
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 699163 ']'
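Note: run_bperf_err (traced above) drives each error case with SPDK's bdevperf example app. The launch boils down to the following sketch, with every flag copied from the trace; the -z flag starts bdevperf idle so it waits for a perform_tests RPC instead of running I/O immediately:

  #!/usr/bin/env bash
  # Sketch of the bdevperf launch above: core mask 0x2 (core 1), private
  # RPC socket, 4 KiB random writes, queue depth 128, 2 s runtime.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo path from this job
  "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!   # waitforlisten then polls until the socket accepts RPCs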
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:50.417 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:31:50.417 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:50.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:50.417 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:31:50.417 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:50.417 [2024-11-20 06:42:22.187198] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:31:50.417 [2024-11-20 06:42:22.187262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699163 ]
00:31:50.722 [2024-11-20 06:42:22.261788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:50.722 [2024-11-20 06:42:22.300306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:50.722 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:31:50.722 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:31:50.722 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:50.722 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:50.979 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:50.980 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:50.980 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:50.980 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:50.980 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:51.238 nvme0n1
00:31:51.238 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
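Note: the four RPCs replayed above are the whole per-case setup: error counting on with unlimited bdev retries, any stale crc32c injection cleared, the controller attached with the NVMe/TCP data digest enabled (--ddgst), and finally crc32c corruption armed in the accel layer for 256 operations so computed data digests mismatch on the wire. As a standalone sketch (the rpc() wrapper is illustrative; every argument is taken verbatim from the trace):

  #!/usr/bin/env bash
  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors; retry forever
  rpc accel_error_inject_error -o crc32c -t disable                   # start from a clean slate
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest on
  rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt 256 crc32c ops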
00:31:51.238 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:51.238 06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
06:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:51.238 Running I/O for 2 seconds...
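Note: every injected failure below prints the same three lines: the accel layer reports the mismatched data digest, then the offending WRITE command and its completion are dumped. The completion status "(00/22)" is status code type / status code in hex: SCT 0x0 (generic command status) with SC 0x22, COMMAND TRANSIENT TRANSPORT ERROR. With dnr:0 the command stays retryable and --bdev-retry-count -1 retries it, which is why a run can finish with io_failed 0 while the transient error counter climbs. An illustrative decode helper:

  #!/usr/bin/env bash
  decode_status() {  # e.g. decode_status 00/22
    local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
    printf 'SCT=0x%x SC=0x%x\n' "$sct" "$sc"
  }
  decode_status 00/22   # -> SCT=0x0 SC=0x22 (transient transport error)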
00:31:51.238 [2024-11-20 06:42:23.001475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e4578
00:31:51.238 [2024-11-20 06:42:23.002301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:51.238 [2024-11-20 06:42:23.002332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:51.238 [... 06:42:23.010547 through 06:42:23.286016: roughly 30 further entries of the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern on tqpair=(0xae2180), one per injected crc32c corruption, trimmed here ...]
00:31:51.499 [2024-11-20 06:42:23.294721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ee5c8
00:31:51.499 [2024-11-20 06:42:23.295573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:51.499 [2024-11-20 06:42:23.295594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:51.499 [2024-11-20 06:42:23.303254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fb048
00:31:51.499 [2024-11-20 06:42:23.303858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:51.499 [2024-11-20 06:42:23.303877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:51.499 [2024-11-20 06:42:23.311882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166eee38 00:31:51.499 [2024-11-20 06:42:23.312579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.499 [2024-11-20 06:42:23.312598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:51.499 [2024-11-20 06:42:23.321324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f5be8 00:31:51.499 [2024-11-20 06:42:23.322130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.499 [2024-11-20 06:42:23.322149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.330681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ef270 00:31:51.758 [2024-11-20 06:42:23.331527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.331551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.341072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f0350 00:31:51.758 [2024-11-20 06:42:23.342039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.342062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.349440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f4298 00:31:51.758 [2024-11-20 06:42:23.350387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.350406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.358415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f0bc0 00:31:51.758 [2024-11-20 06:42:23.359462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.359481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.367519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fac10 00:31:51.758 [2024-11-20 06:42:23.368230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.368251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.375772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f3e60 00:31:51.758 [2024-11-20 06:42:23.376579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.376599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.385252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e84c0 00:31:51.758 [2024-11-20 06:42:23.386157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.386176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.394658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e7818 00:31:51.758 [2024-11-20 06:42:23.395680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.395699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.404056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ec840 00:31:51.758 [2024-11-20 06:42:23.405193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.405215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.413489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f3e60 00:31:51.758 [2024-11-20 06:42:23.414779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.414799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.422640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ecc78 00:31:51.758 [2024-11-20 06:42:23.423896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.423915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.430113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e6b70 00:31:51.758 [2024-11-20 06:42:23.430605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.430624] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.441605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ed920 00:31:51.758 [2024-11-20 06:42:23.443100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.443120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.447921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e73e0 00:31:51.758 [2024-11-20 06:42:23.448614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.448633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.456457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f96f8 00:31:51.758 [2024-11-20 06:42:23.457129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.457148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.465642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166dece0 00:31:51.758 [2024-11-20 06:42:23.466313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.466333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.474909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166de8a8 00:31:51.758 [2024-11-20 06:42:23.475605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.475624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.485475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fd208 00:31:51.758 [2024-11-20 06:42:23.486430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.486449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:51.758 [2024-11-20 06:42:23.493719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f31b8 00:31:51.758 [2024-11-20 06:42:23.494739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.758 [2024-11-20 06:42:23.494758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.503099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fa3a0 00:31:51.759 [2024-11-20 06:42:23.504284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.504304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.512560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fe720 00:31:51.759 [2024-11-20 06:42:23.513855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.513878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.521149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f4f40 00:31:51.759 [2024-11-20 06:42:23.521995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.522015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.531372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f0ff8 00:31:51.759 [2024-11-20 06:42:23.532794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.532813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.537936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fb8b8 00:31:51.759 [2024-11-20 06:42:23.538618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.538637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.547399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fa3a0 00:31:51.759 [2024-11-20 06:42:23.548191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.548213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.556531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166feb58 00:31:51.759 [2024-11-20 06:42:23.557351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.557370] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.565839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e5220 00:31:51.759 [2024-11-20 06:42:23.566642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.566661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.574422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ddc00 00:31:51.759 [2024-11-20 06:42:23.575150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.575171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:51.759 [2024-11-20 06:42:23.584378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e5220 00:31:51.759 [2024-11-20 06:42:23.585466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.759 [2024-11-20 06:42:23.585492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.593419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f8618 00:31:52.018 [2024-11-20 06:42:23.594498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.594522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.602922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f20d8 00:31:52.018 [2024-11-20 06:42:23.604067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.604088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.612357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f6cc8 00:31:52.018 [2024-11-20 06:42:23.613651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.613671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.621966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ed0b0 00:31:52.018 [2024-11-20 06:42:23.623344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 
06:42:23.623363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.631225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e0ea0 00:31:52.018 [2024-11-20 06:42:23.632604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.632624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.638630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f1868 00:31:52.018 [2024-11-20 06:42:23.639247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.639267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.648001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e5220 00:31:52.018 [2024-11-20 06:42:23.648705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.648724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.657062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f7100 00:31:52.018 [2024-11-20 06:42:23.658000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.658019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.665910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166df118 00:31:52.018 [2024-11-20 06:42:23.666613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.666633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.674383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e8d30 00:31:52.018 [2024-11-20 06:42:23.675633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.675652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.682122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fb480 00:31:52.018 [2024-11-20 06:42:23.682788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 
[2024-11-20 06:42:23.682807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.691509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e7c50 00:31:52.018 [2024-11-20 06:42:23.692286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.692306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.700913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fc560 00:31:52.018 [2024-11-20 06:42:23.701816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.018 [2024-11-20 06:42:23.701835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:52.018 [2024-11-20 06:42:23.710301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e12d8 00:31:52.018 [2024-11-20 06:42:23.711322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.711342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.719700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e7818 00:31:52.019 [2024-11-20 06:42:23.720814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.720833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.728807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e7c50 00:31:52.019 [2024-11-20 06:42:23.729965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.729984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.737375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f35f0 00:31:52.019 [2024-11-20 06:42:23.738404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.738423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.746618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ee5c8 00:31:52.019 [2024-11-20 06:42:23.747649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12576 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:52.019 [2024-11-20 06:42:23.747672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.755972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fe720 00:31:52.019 [2024-11-20 06:42:23.756950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.756970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.765036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fb8b8 00:31:52.019 [2024-11-20 06:42:23.766012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.766033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.774670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ecc78 00:31:52.019 [2024-11-20 06:42:23.775937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.775957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.783937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166feb58 00:31:52.019 [2024-11-20 06:42:23.785246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.785265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.791811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e1710 00:31:52.019 [2024-11-20 06:42:23.792567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.792587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.800171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f5be8 00:31:52.019 [2024-11-20 06:42:23.800980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.800999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.809586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ee5c8 00:31:52.019 [2024-11-20 06:42:23.810511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2015 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.810531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.819001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e7818 00:31:52.019 [2024-11-20 06:42:23.820039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.820059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.828397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166feb58 00:31:52.019 [2024-11-20 06:42:23.829558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.829577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.837799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fe2e8 00:31:52.019 [2024-11-20 06:42:23.839067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.839086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:52.019 [2024-11-20 06:42:23.847321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e5a90 00:31:52.019 [2024-11-20 06:42:23.848766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.019 [2024-11-20 06:42:23.848788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.854019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166de038 00:31:52.279 [2024-11-20 06:42:23.854761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.854784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.864008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f7970 00:31:52.279 [2024-11-20 06:42:23.864724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.864745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.875445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166eb760 00:31:52.279 [2024-11-20 06:42:23.876844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3185 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.876865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.882889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e01f8 00:31:52.279 [2024-11-20 06:42:23.883529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.883548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.892357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ecc78 00:31:52.279 [2024-11-20 06:42:23.893075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.893095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.900865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166eaab8 00:31:52.279 [2024-11-20 06:42:23.902136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.902155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.908597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f9f68 00:31:52.279 [2024-11-20 06:42:23.909273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.909292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.918047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fb480 00:31:52.279 [2024-11-20 06:42:23.918837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.918857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.927178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ea248 00:31:52.279 [2024-11-20 06:42:23.928007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.928026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.935726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e4140 00:31:52.279 [2024-11-20 06:42:23.936439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 
nsid:1 lba:22199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.936458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.944999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166df118 00:31:52.279 [2024-11-20 06:42:23.945750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.945769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.955208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f9b30 00:31:52.279 [2024-11-20 06:42:23.955980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.956000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.964398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f7538 00:31:52.279 [2024-11-20 06:42:23.965477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.965496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.973376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e01f8 00:31:52.279 [2024-11-20 06:42:23.974579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.974599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.983034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f0788 00:31:52.279 [2024-11-20 06:42:23.984217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.984240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:23.991500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f7970 00:31:52.279 [2024-11-20 06:42:23.992577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:23.992597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:52.279 28088.00 IOPS, 109.72 MiB/s [2024-11-20T05:42:24.115Z] [2024-11-20 06:42:24.001350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166efae0 00:31:52.279 [2024-11-20 06:42:24.002076] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.002096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.009842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f2510 00:31:52.279 [2024-11-20 06:42:24.011165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.011184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.017578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e88f8 00:31:52.279 [2024-11-20 06:42:24.018307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.018327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.027265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f5378 00:31:52.279 [2024-11-20 06:42:24.028119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.028138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.038487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fd208 00:31:52.279 [2024-11-20 06:42:24.039709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.039728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.047103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fd208 00:31:52.279 [2024-11-20 06:42:24.048191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.048213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.055508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e73e0 00:31:52.279 [2024-11-20 06:42:24.056488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.056507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.063868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fd208 00:31:52.279 [2024-11-20 
06:42:24.064696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.064714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.074737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e9168 00:31:52.279 [2024-11-20 06:42:24.075843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.279 [2024-11-20 06:42:24.075862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:52.279 [2024-11-20 06:42:24.082318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e7c50 00:31:52.279 [2024-11-20 06:42:24.082824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.280 [2024-11-20 06:42:24.082843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:52.280 [2024-11-20 06:42:24.091721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e38d0 00:31:52.280 [2024-11-20 06:42:24.092353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.280 [2024-11-20 06:42:24.092373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:52.280 [2024-11-20 06:42:24.100280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fcdd0 00:31:52.280 [2024-11-20 06:42:24.101264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.280 [2024-11-20 06:42:24.101283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.111656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166ed0b0 00:31:52.540 [2024-11-20 06:42:24.113135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.113158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.121195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f0ff8 00:31:52.540 [2024-11-20 06:42:24.122764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.122785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.127668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f9f68 00:31:52.540 
[2024-11-20 06:42:24.128533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.128552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.137071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f81e0 00:31:52.540 [2024-11-20 06:42:24.138071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.138090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.146492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fb8b8 00:31:52.540 [2024-11-20 06:42:24.147576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.147595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.155906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fc998 00:31:52.540 [2024-11-20 06:42:24.157136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.157157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.165059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e6300 00:31:52.540 [2024-11-20 06:42:24.166294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.166314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.173646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fbcf0 00:31:52.540 [2024-11-20 06:42:24.174751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.174771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.182918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166e1f80 00:31:52.540 [2024-11-20 06:42:24.184004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.540 [2024-11-20 06:42:24.184024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:52.540 [2024-11-20 06:42:24.190232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with 
pdu=0x2000166f4f40
00:31:52.540 [2024-11-20 06:42:24.190850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:52.540 [2024-11-20 06:42:24.190870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:31:52.540 [2024-11-20 06:42:24.199224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166fac10
00:31:52.540 [2024-11-20 06:42:24.199848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:52.540 [2024-11-20 06:42:24.199868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... dozens of equivalent record groups elided: each 4096-byte WRITE hits "tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180)" and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamps run 06:42:24.209365 through 06:42:24.987317 with varying cid, lba, and pdu values; this phase accumulates 218 such completions in total, per the get_transient_errcount check below ...]
00:31:53.319 [2024-11-20 06:42:24.996529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae2180) with pdu=0x2000166f7da8
00:31:53.320 [2024-11-20 06:42:24.997779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:53.320 [2024-11-20 06:42:24.997799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:31:53.320 27782.50 IOPS, 108.53 MiB/s
00:31:53.320 Latency(us)
00:31:53.320 [2024-11-20T05:42:25.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:53.320 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:53.320 nvme0n1 : 2.00 27767.65 108.47 0.00 0.00 4602.11 2106.51 12545.46
00:31:53.320 [2024-11-20T05:42:25.156Z] ===================================================================================================================
00:31:53.320 [2024-11-20T05:42:25.156Z] Total : 27767.65 108.47 0.00 0.00 4602.11 2106.51 12545.46
00:31:53.320 {
00:31:53.320 "results": [
00:31:53.320 {
00:31:53.320 "job": "nvme0n1",
00:31:53.320 "core_mask": "0x2",
00:31:53.320 "workload": "randwrite",
00:31:53.320 "status": "finished",
00:31:53.320 "queue_depth": 128,
00:31:53.320 "io_size": 4096,
00:31:53.320 "runtime": 2.004527,
00:31:53.320 "iops": 27767.647928912906,
00:31:53.320 "mibps": 108.46737472231604,
00:31:53.320 "io_failed": 0,
00:31:53.320 "io_timeout": 0,
00:31:53.320 "avg_latency_us": 4602.105347045594,
00:31:53.320 "min_latency_us": 2106.5142857142855,
00:31:53.320 "max_latency_us": 12545.462857142857
00:31:53.320 }
00:31:53.320 ],
00:31:53.320 "core_count": 1
00:31:53.320 }
00:31:53.320 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:53.320 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:53.320 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:53.320 | .driver_specific
00:31:53.320 | .nvme_error
00:31:53.320 | .status_code
00:31:53.320 | .command_transient_transport_error'
00:31:53.320 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
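For orientation, the errcount check that follows condenses to a single pipeline; the socket path, bdev name, and jq filter are taken verbatim from the trace above, while the variable name is illustrative (the per-status-code counter only exists because bdev_nvme_set_options was run with --nvme-error-stat, as the setup trace further down shows):

    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # in this run the counter is 218, matching the check in the next trace line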
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 699163
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 699163 ']'
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 699163
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 699163
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 699163'
00:31:53.578 killing process with pid 699163
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 699163
00:31:53.578 Received shutdown signal, test time was about 2.000000 seconds
00:31:53.578
00:31:53.578 Latency(us)
00:31:53.578 [2024-11-20T05:42:25.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:53.578 [2024-11-20T05:42:25.414Z] ===================================================================================================================
00:31:53.578 [2024-11-20T05:42:25.414Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:53.578 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 699163
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=699641
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 699641 /var/tmp/bperf.sock
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 699641 ']'
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:53.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:31:53.836 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:53.836 [2024-11-20 06:42:25.478761] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:31:53.836 [2024-11-20 06:42:25.478813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699641 ]
00:31:53.836 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:53.836 Zero copy mechanism will not be used.
00:31:53.836 [2024-11-20 06:42:25.553996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:53.836 [2024-11-20 06:42:25.590763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:54.094 06:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:54.352 nvme0n1
00:31:54.352 06:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:54.352 06:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:54.352 06:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:54.352 06:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
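Condensed for orientation before the run starts: the setup just traced, plus the perform_tests call that follows, amounts to the shell sequence below. Every binary path, flag, and RPC is taken verbatim from this log; the SPDK_DIR shorthand, the backgrounding, and the comments are illustrative only, a sketch rather than the harness's exact code:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # bdevperf on core mask 0x2 with a private RPC socket: 128 KiB random writes, qd 16, 2 s, started later via RPC (-z)
    $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    # keep per-status-code NVMe error counters and retry indefinitely, so injected errors are counted rather than fatal
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the TCP controller with data digest enabled; --ddgst is what makes tcp.c verify the payload CRC32C
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # via the default RPC socket (rpc_cmd in the trace), tell the accel layer to corrupt crc32c results at interval 32
    $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the workload; each corrupted digest surfaces below as a COMMAND TRANSIENT TRANSPORT ERROR completion
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests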
00:31:54.352 06:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:54.352 06:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:54.611 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:54.611 Zero copy mechanism will not be used.
00:31:54.611 Running I/O for 2 seconds...
00:31:54.611 [2024-11-20 06:42:26.263004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8
00:31:54.611 [2024-11-20 06:42:26.263074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.611 [2024-11-20 06:42:26.263104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:54.611 [2024-11-20 06:42:26.267581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8
00:31:54.611 [2024-11-20 06:42:26.267641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.611 [2024-11-20 06:42:26.267665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... dozens of equivalent record groups elided: each 131072-byte WRITE (len:32) hits "tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0)" and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamps run 06:42:26.272035 through 06:42:26.447324 with varying lba and sqhd values ...]
00:31:54.873 [2024-11-20 06:42:26.451480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8
00:31:54.873 [2024-11-20 06:42:26.451536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.873 [2024-11-20
06:42:26.451555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.455621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.455683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.455702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.459824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.459879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.459897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.464083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.464142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.464160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.468241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.468354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.468372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.472908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.472997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.473016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.477483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.477547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.477566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.481924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.481993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:54.873 [2024-11-20 06:42:26.482012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.486276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.486346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.486365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.490475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.490566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.490586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.494887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.494944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.494963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.499101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.499170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.499188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.503557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.503646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.503665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.507825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.507894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.507913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.512402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.512464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:54.873 [2024-11-20 06:42:26.512482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.873 [2024-11-20 06:42:26.517467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.873 [2024-11-20 06:42:26.517527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.517545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.522289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.522349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.522367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.527700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.527800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.527819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.532526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.532583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.532602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.537279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.537403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.537422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.542432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.542486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.542504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.547533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.547586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.547605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.553140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.553255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.553274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.558051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.558108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.558134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.563188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.563250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.563268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.568105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.568170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.568189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.572978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.573045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.573063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.577812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.577864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.577882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.582646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.582712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.582730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.588070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.588169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.588188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.593742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.593851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.593870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.600324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.600379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.600398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.605114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.605187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.605211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.609511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.609585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.609603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.613938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.614040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.614058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.618436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.618491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.618510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.622803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.622874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.622892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.627165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.627234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.627252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.631433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.631502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.631520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.635698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.635764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.635783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.640364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.640534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.640552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.644892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.644950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.874 [2024-11-20 06:42:26.644968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.874 [2024-11-20 06:42:26.649104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.874 [2024-11-20 06:42:26.649219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.649237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.653616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.653689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.653707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.657903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.657978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.657997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.662302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.662364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.662382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.666623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.666675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.666693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.670897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.670960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.670978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.675294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.675348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.675366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.679593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 
06:42:26.679692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.679714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.683921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.683992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.684011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.688265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.688356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.688374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.692647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.692717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.692735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.697353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.697428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.697459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:54.875 [2024-11-20 06:42:26.702010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:54.875 [2024-11-20 06:42:26.702086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.875 [2024-11-20 06:42:26.702109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.706508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.706605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.706626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.710839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 
00:31:55.135 [2024-11-20 06:42:26.710900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.710922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.715213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.715276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.715295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.719598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.719669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.719687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.724093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.724147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.724166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.728425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.728479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.728498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.732716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.732787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.732806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.737083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.737144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.737163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.741462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) 
with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.741532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.741551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.746023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.746094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.746112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.750378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.750451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.750480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.754942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.754994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.755012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.759474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.759566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.759585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.764460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.764516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.764534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.769653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.769706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.769724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.774616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.774739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.774758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.779846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.135 [2024-11-20 06:42:26.779903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.135 [2024-11-20 06:42:26.779922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.135 [2024-11-20 06:42:26.784963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.785018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.785037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.789936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.789990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.790007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.795509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.795565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.795583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.800616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.800681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.800704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.805436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.805556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.805574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.810171] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.810232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.810249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.815144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.815210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.815228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.820249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.820306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.820324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.825195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.825275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.825293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.829800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.829873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.829891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.835143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.835209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.835228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.840098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.840189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.840214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.845117] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.845210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.845228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.850126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.850179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.850197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.856054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.856107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.856125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.862313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.862388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.862407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.868091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.868182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.868206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.875173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.875328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.875346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 06:42:26.882348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.136 [2024-11-20 06:42:26.882404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.136 [2024-11-20 06:42:26.882423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.136 [2024-11-20 
06:42:26.889340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8
00:31:55.136 [2024-11-20 06:42:26.889429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.136 [2024-11-20 06:42:26.889448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... log compressed: a long run of further injected data digest errors on tqpair=(0xae24c0) follows, one every few milliseconds through 06:42:27.277; each is a len:32 WRITE at a varying LBA, caught in tcp.c:2233:data_crc32_calc_done and completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
6541.00 IOPS, 817.62 MiB/s [2024-11-20T05:42:27.263Z]
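[editor's note: the digest failures above appear to be deliberately provoked by this stage of the test, which keeps feeding corrupted payloads so the receive path must flag a data digest mismatch and complete each WRITE with a transient transport error instead of silently accepting bad data. As a minimal sketch of that mechanism, assuming a hypothetical 512-byte payload with a single flipped bit, the standalone C program below computes the CRC32C digest that the NVMe/TCP data digest uses and reports a mismatch the same way the log does; it is illustrative only, not SPDK's implementation.]

/*
 * Illustrative sketch only -- not SPDK's code. Detects an NVMe/TCP-style
 * data digest (CRC32C over the PDU payload) mismatch on receive.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];                       /* hypothetical PDU payload */
    memset(payload, 0xA5, sizeof(payload));

    /* Sender computes the digest over the payload it transmits. */
    uint32_t ddgst_sent = crc32c(payload, sizeof(payload));

    /* Simulate one bit corrupted in flight (what the test injects). */
    payload[100] ^= 0x01;

    /* Receiver recomputes the digest over what actually arrived. */
    uint32_t ddgst_recv = crc32c(payload, sizeof(payload));

    if (ddgst_recv != ddgst_sent) {
        /* Mirrors the log above: digest mismatch -> data digest error,
         * and the command completes with a transient transport error. */
        printf("Data digest error: expected 0x%08x, got 0x%08x\n",
               ddgst_sent, ddgst_recv);
        return 1;
    }
    return 0;
}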
[... further injected data digest errors elided (06:42:27.277 through 06:42:27.586, same pattern) ...]
00:31:55.921 [2024-11-20 06:42:27.590587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8
00:31:55.921 [2024-11-20 06:42:27.590645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.921 [2024-11-20 06:42:27.590664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.594814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.594971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.594990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.599731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.599800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.599818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.603906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.603966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.603984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.608110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.608220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.608242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.612906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.613105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.613123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.619053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.619251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.619270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.624635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.624739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.624758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.630382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.630543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.630562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.636539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.636714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.636733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.643074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.643233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.643253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.649362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.649514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.649533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.655983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.656144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.656163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.662156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.662232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.662251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.667020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.667087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.667105] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.672462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.672564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.921 [2024-11-20 06:42:27.672582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.921 [2024-11-20 06:42:27.677176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.921 [2024-11-20 06:42:27.677269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.677287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.682033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.682098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.682116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.687092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.687161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.687181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.692523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.692576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.692594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.697764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.697900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.697919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.702940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.703043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.703062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.707677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.707754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.707773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.712294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.712354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.712373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.716894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.716994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.717012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.721460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.721569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.721587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.726088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.726147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.726165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.730738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.730802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.730820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.735322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.735404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 
06:42:27.735423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.739794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.739860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.739878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.744486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.744545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.744566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:55.922 [2024-11-20 06:42:27.749238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:55.922 [2024-11-20 06:42:27.749354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.922 [2024-11-20 06:42:27.749376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.183 [2024-11-20 06:42:27.753836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.753909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.753930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.758401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.758454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.758476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.763011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.763116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.763136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.767611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.767672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.184 [2024-11-20 06:42:27.767690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.771923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.772025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.772044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.776279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.776347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.776367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.780603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.780658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.780676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.784949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.785022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.785042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.789291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.789356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.789375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.793680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.793746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.793765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.798067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.798135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.798154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.802459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.802539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.802558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.806782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.806852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.806871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.811104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.811208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.811242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.815670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.815728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.815747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.820182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.820309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.820330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.825498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.825553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.825572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.830550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.830618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.830637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.835290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.835371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.835392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.839911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.840025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.840045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.844560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.844656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.844674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.849143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.849209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.849227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.853757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.853809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.853827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.858148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.858214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.858232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.862421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.862473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.862494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.867078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.867133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.867151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.184 [2024-11-20 06:42:27.871783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.184 [2024-11-20 06:42:27.871918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.184 [2024-11-20 06:42:27.871936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.876863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.876917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.876935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.883286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.883445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.883464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.890300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.890441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.890460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.897483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.897639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.897658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.904894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.905044] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.905063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.911817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.911957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.911976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.919695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.919883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.919902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.926622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.926776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.926795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.933722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.933882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.933901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.941047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.941195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.941220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.947886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.948045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.948066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.954955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.955102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.955121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.961899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.962035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.962054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.969070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.969262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.969281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.976775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.976828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.976846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.984032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.984218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.984237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.990985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.991171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.991189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:27.997474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:27.997629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:27.997647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:28.004513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 
06:42:28.004639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:28.004657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.185 [2024-11-20 06:42:28.011025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.185 [2024-11-20 06:42:28.011098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.185 [2024-11-20 06:42:28.011121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.017355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.017485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.017506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.023734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.023793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.023813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.028869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.028958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.028977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.033901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.033990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.034013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.038814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.038887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.038907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.043509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 
00:31:56.445 [2024-11-20 06:42:28.043568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.043588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.048602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.048655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.048673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.053578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.053760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.053779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.058518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.058780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.058801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.063524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.063775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.063796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.068101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.068343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.068362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.072466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.072710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.072729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.445 [2024-11-20 06:42:28.076741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with 
pdu=0x2000166ff3c8 00:31:56.445 [2024-11-20 06:42:28.076981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.445 [2024-11-20 06:42:28.077002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same record triplet (tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0), the WRITE command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats once per injected data digest error from 06:42:28.080 through 06:42:28.261; only the timestamps, lba, cid and sqhd values vary, so the run is trimmed here — the test counts 414 such completions below ...]
00:31:56.447 6396.50 IOPS, 799.56 MiB/s [2024-11-20T05:42:28.283Z] [2024-11-20 06:42:28.266086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xae24c0) with pdu=0x2000166ff3c8 00:31:56.447 [2024-11-20 06:42:28.266138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.447 [2024-11-20 06:42:28.266156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.447 00:31:56.447 Latency(us) 00:31:56.447 [2024-11-20T05:42:28.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.447 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:56.447 nvme0n1 : 2.00 6394.89 799.36 0.00 0.00 2497.96 1747.63 7739.49 00:31:56.447 [2024-11-20T05:42:28.283Z] =================================================================================================================== 00:31:56.447 [2024-11-20T05:42:28.283Z] Total : 6394.89 799.36 0.00 0.00 2497.96 1747.63 7739.49 00:31:56.447 { 00:31:56.447 "results": [ 00:31:56.447 { 00:31:56.447 "job": "nvme0n1", 00:31:56.447 "core_mask": "0x2", 00:31:56.447 "workload": "randwrite", 00:31:56.447 "status": "finished", 00:31:56.447 "queue_depth": 16, 00:31:56.447 "io_size": 131072, 00:31:56.447 "runtime": 2.002849, 00:31:56.447 "iops": 6394.890478513358, 00:31:56.447 "mibps": 
799.3613098141698, 00:31:56.447 "io_failed": 0, 00:31:56.447 "io_timeout": 0, 00:31:56.447 "avg_latency_us": 2497.9575574789565, 00:31:56.447 "min_latency_us": 1747.6266666666668, 00:31:56.447 "max_latency_us": 7739.489523809524 00:31:56.447 } 00:31:56.447 ], 00:31:56.447 "core_count": 1 00:31:56.447 } 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:56.705 | .driver_specific 00:31:56.705 | .nvme_error 00:31:56.705 | .status_code 00:31:56.705 | .command_transient_transport_error' 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 414 > 0 )) 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 699641 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 699641 ']' 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 699641 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:56.705 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 699641 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 699641' 00:31:56.965 killing process with pid 699641 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 699641 00:31:56.965 Received shutdown signal, test time was about 2.000000 seconds 00:31:56.965 00:31:56.965 Latency(us) 00:31:56.965 [2024-11-20T05:42:28.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.965 [2024-11-20T05:42:28.801Z] =================================================================================================================== 00:31:56.965 [2024-11-20T05:42:28.801Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 699641 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 697760 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 697760 ']' 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 697760 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 
00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 697760 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 697760' 00:31:56.965 killing process with pid 697760 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 697760 00:31:56.965 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 697760 00:31:57.224 00:31:57.224 real 0m14.605s 00:31:57.224 user 0m28.235s 00:31:57.224 sys 0m4.513s 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:57.224 ************************************ 00:31:57.224 END TEST nvmf_digest_error 00:31:57.224 ************************************ 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.224 06:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.224 rmmod nvme_tcp 00:31:57.224 rmmod nvme_fabrics 00:31:57.224 rmmod nvme_keyring 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 697760 ']' 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 697760 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 697760 ']' 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 697760 00:31:57.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (697760) - No such process 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 697760 is not found' 00:31:57.224 Process with pid 697760 is not found 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.224 06:42:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.224 06:42:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.758 00:31:59.758 real 0m36.817s 00:31:59.758 user 0m56.592s 00:31:59.758 sys 0m13.556s 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:59.758 ************************************ 00:31:59.758 END TEST nvmf_digest 00:31:59.758 ************************************ 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.758 ************************************ 00:31:59.758 START TEST nvmf_bdevperf 00:31:59.758 ************************************ 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:59.758 * Looking for test storage... 
00:31:59.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:59.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.758 --rc genhtml_branch_coverage=1 00:31:59.758 --rc genhtml_function_coverage=1 00:31:59.758 --rc genhtml_legend=1 00:31:59.758 --rc geninfo_all_blocks=1 00:31:59.758 --rc geninfo_unexecuted_blocks=1 00:31:59.758 00:31:59.758 ' 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:59.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.758 --rc genhtml_branch_coverage=1 00:31:59.758 --rc genhtml_function_coverage=1 00:31:59.758 --rc genhtml_legend=1 00:31:59.758 --rc geninfo_all_blocks=1 00:31:59.758 --rc geninfo_unexecuted_blocks=1 00:31:59.758 00:31:59.758 ' 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:59.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.758 --rc genhtml_branch_coverage=1 00:31:59.758 --rc genhtml_function_coverage=1 00:31:59.758 --rc genhtml_legend=1 00:31:59.758 --rc geninfo_all_blocks=1 00:31:59.758 --rc geninfo_unexecuted_blocks=1 00:31:59.758 00:31:59.758 ' 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:59.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.758 --rc genhtml_branch_coverage=1 00:31:59.758 --rc genhtml_function_coverage=1 00:31:59.758 --rc genhtml_legend=1 00:31:59.758 --rc geninfo_all_blocks=1 00:31:59.758 --rc geninfo_unexecuted_blocks=1 00:31:59.758 00:31:59.758 ' 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.758 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:59.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.759 06:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.324 06:42:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:06.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:06.324 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:06.324 Found net devices under 0000:86:00.0: cvl_0_0 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.324 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:06.324 Found net devices under 0000:86:00.1: cvl_0_1 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:06.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:32:06.325 00:32:06.325 --- 10.0.0.2 ping statistics --- 00:32:06.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.325 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:06.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:32:06.325 00:32:06.325 --- 10.0.0.1 ping statistics --- 00:32:06.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.325 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=703652 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 703652 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 703652 ']' 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.325 [2024-11-20 06:42:37.360639] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:32:06.325 [2024-11-20 06:42:37.360689] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.325 [2024-11-20 06:42:37.440926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:06.325 [2024-11-20 06:42:37.483292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.325 [2024-11-20 06:42:37.483327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.325 [2024-11-20 06:42:37.483335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.325 [2024-11-20 06:42:37.483341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.325 [2024-11-20 06:42:37.483347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.325 [2024-11-20 06:42:37.484810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.325 [2024-11-20 06:42:37.484919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.325 [2024-11-20 06:42:37.484920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.325 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.326 [2024-11-20 06:42:37.621236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.326 Malloc0 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
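For reference, the tgt_init() bring-up that bdevperf.sh drives through rpc_cmd reduces to five SPDK RPCs; the namespace and listener calls are traced just below. A minimal standalone sketch, assuming a running nvmf_tgt reachable on the default RPC socket (here it sits inside the cvl_0_0_ns_spdk netns, which rpc_cmd handles) — the Malloc geometry, NQN, serial number and 10.0.0.2:4420 listener are this job's defaults, not required values:

  #!/usr/bin/env bash
  # Sketch only: replays the RPC sequence from this trace by hand.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options as used by this job
  $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

All five RPC names and arguments appear verbatim in the surrounding trace; only the rpc.py invocation style is restated here.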
00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.326 [2024-11-20 06:42:37.690229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:06.326 { 00:32:06.326 "params": { 00:32:06.326 "name": "Nvme$subsystem", 00:32:06.326 "trtype": "$TEST_TRANSPORT", 00:32:06.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.326 "adrfam": "ipv4", 00:32:06.326 "trsvcid": "$NVMF_PORT", 00:32:06.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.326 "hdgst": ${hdgst:-false}, 00:32:06.326 "ddgst": ${ddgst:-false} 00:32:06.326 }, 00:32:06.326 "method": "bdev_nvme_attach_controller" 00:32:06.326 } 00:32:06.326 EOF 00:32:06.326 )") 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:06.326 06:42:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:06.326 "params": { 00:32:06.326 "name": "Nvme1", 00:32:06.326 "trtype": "tcp", 00:32:06.326 "traddr": "10.0.0.2", 00:32:06.326 "adrfam": "ipv4", 00:32:06.326 "trsvcid": "4420", 00:32:06.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.326 "hdgst": false, 00:32:06.326 "ddgst": false 00:32:06.326 }, 00:32:06.326 "method": "bdev_nvme_attach_controller" 00:32:06.326 }' 00:32:06.326 [2024-11-20 06:42:37.741447] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
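The rpc_cmd calls above are thin wrappers around SPDK's JSON-RPC client, and the gen_nvmf_target_json output is the bdev_nvme_attach_controller config handed to bdevperf through the /dev/fd/62 process substitution. Below is a standalone sketch of the same bring-up and run, assuming the default /var/tmp/spdk.sock RPC socket from the waitforlisten message; the subsystems/config wrapper around the printed params block is assumed from gen_nvmf_target_json in nvmf/common.sh and is not shown verbatim in this log:

    # Target side: transport, backing bdev, subsystem, namespace, listener
    # (arguments copied from the rpc_cmd lines above).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: the generated config, written to a file instead of
    # being streamed through /dev/fd/62 (wrapper structure assumed).
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1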
00:32:06.326 [2024-11-20 06:42:37.741490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid703869 ] 00:32:06.326 [2024-11-20 06:42:37.815742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.326 [2024-11-20 06:42:37.858350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.326 Running I/O for 1 seconds... 00:32:07.258 11537.00 IOPS, 45.07 MiB/s 00:32:07.258 Latency(us) 00:32:07.258 [2024-11-20T05:42:39.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.258 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:07.258 Verification LBA range: start 0x0 length 0x4000 00:32:07.258 Nvme1n1 : 1.01 11553.17 45.13 0.00 0.00 11037.33 2356.18 12420.63 00:32:07.258 [2024-11-20T05:42:39.094Z] =================================================================================================================== 00:32:07.258 [2024-11-20T05:42:39.095Z] Total : 11553.17 45.13 0.00 0.00 11037.33 2356.18 12420.63 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=704118 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:07.515 { 00:32:07.515 "params": { 00:32:07.515 "name": "Nvme$subsystem", 00:32:07.515 "trtype": "$TEST_TRANSPORT", 00:32:07.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.515 "adrfam": "ipv4", 00:32:07.515 "trsvcid": "$NVMF_PORT", 00:32:07.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.515 "hdgst": ${hdgst:-false}, 00:32:07.515 "ddgst": ${ddgst:-false} 00:32:07.515 }, 00:32:07.515 "method": "bdev_nvme_attach_controller" 00:32:07.515 } 00:32:07.515 EOF 00:32:07.515 )") 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:07.515 06:42:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:07.515 "params": { 00:32:07.515 "name": "Nvme1", 00:32:07.515 "trtype": "tcp", 00:32:07.515 "traddr": "10.0.0.2", 00:32:07.515 "adrfam": "ipv4", 00:32:07.515 "trsvcid": "4420", 00:32:07.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:07.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:07.515 "hdgst": false, 00:32:07.515 "ddgst": false 00:32:07.515 }, 00:32:07.515 "method": "bdev_nvme_attach_controller" 00:32:07.515 }' 00:32:07.515 [2024-11-20 06:42:39.236418] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:32:07.515 [2024-11-20 06:42:39.236472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704118 ] 00:32:07.515 [2024-11-20 06:42:39.312510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.771 [2024-11-20 06:42:39.350702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.028 Running I/O for 15 seconds... 00:32:09.890 11389.00 IOPS, 44.49 MiB/s [2024-11-20T05:42:42.294Z] 11384.50 IOPS, 44.47 MiB/s [2024-11-20T05:42:42.294Z] 06:42:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 703652 00:32:10.458 06:42:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:10.458 [2024-11-20 06:42:42.206839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.458 [2024-11-20 06:42:42.206875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.458 [2024-11-20 06:42:42.206894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.458 [2024-11-20 06:42:42.206902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.458 [2024-11-20 06:42:42.206913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.458 [2024-11-20 06:42:42.206921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.458 [2024-11-20 06:42:42.206930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.458 [2024-11-20 06:42:42.206937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.458 [2024-11-20 06:42:42.206946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.458 [2024-11-20 06:42:42.206953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.458 [2024-11-20 06:42:42.206962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.458 [2024-11-20 
06:42:42.206969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.458-00:32:10.462 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for every outstanding request on sqid:1 — READs for lba 101944 through 102672 and WRITEs for lba 102680 through 102896, each len:8 — all completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:32:10.462 [2024-11-20 06:42:42.208953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.462 [2024-11-20 06:42:42.208960] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.462 [2024-11-20 06:42:42.208968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80aba0 is same with the state(6) to be set 00:32:10.462 [2024-11-20 06:42:42.208977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.462 [2024-11-20 06:42:42.208983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.462 [2024-11-20 06:42:42.208989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102912 len:8 PRP1 0x0 PRP2 0x0 00:32:10.462 [2024-11-20 06:42:42.208998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.462 [2024-11-20 06:42:42.211813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:10.462 [2024-11-20 06:42:42.211867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:10.462 [2024-11-20 06:42:42.212399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.462 [2024-11-20 06:42:42.212416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:10.462 [2024-11-20 06:42:42.212424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:10.462 [2024-11-20 06:42:42.212598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:10.462 [2024-11-20 06:42:42.212772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:10.462 [2024-11-20 06:42:42.212780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:10.462 [2024-11-20 06:42:42.212789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:10.462 [2024-11-20 06:42:42.212796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
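errno = 111 in the connect() failures above is ECONNREFUSED: after the kill -9 of nvmf_tgt (pid 703652), nothing is listening on 10.0.0.2 port 4420 anymore, so each reconnect attempt from the bdev_nvme layer is refused immediately and the reset completes as failed. A quick manual probe shows the same condition (a sketch; nc availability on the test host is assumed):

    # Exits non-zero while the listener is gone; succeeds once a target
    # is accepting connections on 10.0.0.2:4420 again.
    nc -z -w1 10.0.0.2 4420 && echo 'listener up' || echo 'connection refused'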
00:32:10.462 [2024-11-20 06:42:42.225029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:10.462 [2024-11-20 06:42:42.225470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.462 [2024-11-20 06:42:42.225522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:10.462 [2024-11-20 06:42:42.225548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:10.462 [2024-11-20 06:42:42.226129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:10.462 [2024-11-20 06:42:42.226728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:10.462 [2024-11-20 06:42:42.226756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:10.462 [2024-11-20 06:42:42.226787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:10.462 [2024-11-20 06:42:42.226813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:10.462 [2024-11-20 06:42:42.237800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:10.462 [2024-11-20 06:42:42.238228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.462 [2024-11-20 06:42:42.238246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:10.462 [2024-11-20 06:42:42.238254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:10.462 [2024-11-20 06:42:42.238414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:10.462 [2024-11-20 06:42:42.238575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:10.462 [2024-11-20 06:42:42.238584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:10.462 [2024-11-20 06:42:42.238590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:10.462 [2024-11-20 06:42:42.238597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:10.462 [2024-11-20 06:42:42.250535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.462 [2024-11-20 06:42:42.250961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.462 [2024-11-20 06:42:42.251010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.462 [2024-11-20 06:42:42.251036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.462 [2024-11-20 06:42:42.251641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.462 [2024-11-20 06:42:42.251812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.462 [2024-11-20 06:42:42.251821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.462 [2024-11-20 06:42:42.251829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.462 [2024-11-20 06:42:42.251835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.462 [2024-11-20 06:42:42.263363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.462 [2024-11-20 06:42:42.263785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.462 [2024-11-20 06:42:42.263804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.462 [2024-11-20 06:42:42.263811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.462 [2024-11-20 06:42:42.263971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.462 [2024-11-20 06:42:42.264131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.462 [2024-11-20 06:42:42.264140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.463 [2024-11-20 06:42:42.264146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.463 [2024-11-20 06:42:42.264153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.463 [2024-11-20 06:42:42.276157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.463 [2024-11-20 06:42:42.276511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.463 [2024-11-20 06:42:42.276528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.463 [2024-11-20 06:42:42.276536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.463 [2024-11-20 06:42:42.276696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.463 [2024-11-20 06:42:42.276855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.463 [2024-11-20 06:42:42.276864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.463 [2024-11-20 06:42:42.276871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.463 [2024-11-20 06:42:42.276877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.289221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.289637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.289683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.289707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.290303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.290890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.290916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.723 [2024-11-20 06:42:42.290938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.723 [2024-11-20 06:42:42.290959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.304317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.304782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.304828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.304852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.305448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.306032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.306060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.723 [2024-11-20 06:42:42.306070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.723 [2024-11-20 06:42:42.306081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.317211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.317563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.317581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.317592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.317761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.317928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.317938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.723 [2024-11-20 06:42:42.317945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.723 [2024-11-20 06:42:42.317951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.330023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.330378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.330395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.330404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.330563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.330722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.330731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.723 [2024-11-20 06:42:42.330738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.723 [2024-11-20 06:42:42.330744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.342765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.343165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.343182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.343191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.343379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.343548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.343557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.723 [2024-11-20 06:42:42.343564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.723 [2024-11-20 06:42:42.343571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.355519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.355935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.355952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.355959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.356117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.356302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.356313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.723 [2024-11-20 06:42:42.356320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.723 [2024-11-20 06:42:42.356326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.368451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.368870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.368887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.368894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.369053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.369219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.369245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.723 [2024-11-20 06:42:42.369252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.723 [2024-11-20 06:42:42.369259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.381243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.381634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.381651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.381659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.381817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.381976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.381986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.723 [2024-11-20 06:42:42.381992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.723 [2024-11-20 06:42:42.381999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.723 [2024-11-20 06:42:42.393999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.723 [2024-11-20 06:42:42.394371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.723 [2024-11-20 06:42:42.394388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.723 [2024-11-20 06:42:42.394395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.723 [2024-11-20 06:42:42.394554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.723 [2024-11-20 06:42:42.394713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.723 [2024-11-20 06:42:42.394723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.394732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.394739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.406764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.407128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.407145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.407152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.407335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.407503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.407513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.407519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.407526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.419615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.420004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.420021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.420029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.420188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.420373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.420384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.420390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.420397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.432618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.433046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.433065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.433073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.433238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.433422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.433432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.433438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.433445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.445356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.445686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.445703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.445711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.445870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.446030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.446039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.446045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.446052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.458181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.458578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.458596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.458604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.458772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.458939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.458949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.458955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.458962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.471279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.471686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.471704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.471713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.471885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.472057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.472067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.472074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.472081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.484238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.484644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.484662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.484673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.484846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.485017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.485027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.485034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.485041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.497180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.497597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.497615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.497622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.497789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.497956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.497966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.497973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.497979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.724 [2024-11-20 06:42:42.510136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.724 [2024-11-20 06:42:42.510503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.724 [2024-11-20 06:42:42.510520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.724 [2024-11-20 06:42:42.510528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.724 [2024-11-20 06:42:42.510686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.724 [2024-11-20 06:42:42.510846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.724 [2024-11-20 06:42:42.510855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.724 [2024-11-20 06:42:42.510862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.724 [2024-11-20 06:42:42.510868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.725 [2024-11-20 06:42:42.522912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.725 [2024-11-20 06:42:42.523348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.725 [2024-11-20 06:42:42.523395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.725 [2024-11-20 06:42:42.523419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.725 [2024-11-20 06:42:42.523999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.725 [2024-11-20 06:42:42.524598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.725 [2024-11-20 06:42:42.524609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.725 [2024-11-20 06:42:42.524616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.725 [2024-11-20 06:42:42.524622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.725 [2024-11-20 06:42:42.535677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.725 [2024-11-20 06:42:42.536088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.725 [2024-11-20 06:42:42.536105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.725 [2024-11-20 06:42:42.536112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.725 [2024-11-20 06:42:42.536293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.725 [2024-11-20 06:42:42.536461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.725 [2024-11-20 06:42:42.536471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.725 [2024-11-20 06:42:42.536478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.725 [2024-11-20 06:42:42.536484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.725 [2024-11-20 06:42:42.548382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.725 [2024-11-20 06:42:42.548732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.725 [2024-11-20 06:42:42.548778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.725 [2024-11-20 06:42:42.548802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.725 [2024-11-20 06:42:42.549340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.725 [2024-11-20 06:42:42.549515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.725 [2024-11-20 06:42:42.549525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.725 [2024-11-20 06:42:42.549532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.725 [2024-11-20 06:42:42.549539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.985 [2024-11-20 06:42:42.561391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.985 [2024-11-20 06:42:42.561823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.985 [2024-11-20 06:42:42.561869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.985 [2024-11-20 06:42:42.561893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.985 [2024-11-20 06:42:42.562486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.985 [2024-11-20 06:42:42.562683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.985 [2024-11-20 06:42:42.562693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.985 [2024-11-20 06:42:42.562703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.985 [2024-11-20 06:42:42.562711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.985 [2024-11-20 06:42:42.574204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.985 [2024-11-20 06:42:42.574622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.985 [2024-11-20 06:42:42.574660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.985 [2024-11-20 06:42:42.574686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.985 [2024-11-20 06:42:42.575277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.985 [2024-11-20 06:42:42.575778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.985 [2024-11-20 06:42:42.575788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.985 [2024-11-20 06:42:42.575794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.985 [2024-11-20 06:42:42.575801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.985 [2024-11-20 06:42:42.587025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.985 [2024-11-20 06:42:42.587416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.985 [2024-11-20 06:42:42.587433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.985 [2024-11-20 06:42:42.587441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.985 [2024-11-20 06:42:42.587598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.985 [2024-11-20 06:42:42.587757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.985 [2024-11-20 06:42:42.587766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.985 [2024-11-20 06:42:42.587773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.985 [2024-11-20 06:42:42.587779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.985 [2024-11-20 06:42:42.599794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.985 [2024-11-20 06:42:42.600214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.985 [2024-11-20 06:42:42.600258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.985 [2024-11-20 06:42:42.600283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.985 [2024-11-20 06:42:42.600846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.985 [2024-11-20 06:42:42.601016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.985 [2024-11-20 06:42:42.601026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.985 [2024-11-20 06:42:42.601033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.985 [2024-11-20 06:42:42.601039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.985 [2024-11-20 06:42:42.612535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.985 [2024-11-20 06:42:42.612888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.985 [2024-11-20 06:42:42.612906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.985 [2024-11-20 06:42:42.612913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.985 [2024-11-20 06:42:42.613072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.985 [2024-11-20 06:42:42.613236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.985 [2024-11-20 06:42:42.613246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.985 [2024-11-20 06:42:42.613269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.985 [2024-11-20 06:42:42.613277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.985 [2024-11-20 06:42:42.625341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.985 [2024-11-20 06:42:42.625731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.985 [2024-11-20 06:42:42.625748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.985 [2024-11-20 06:42:42.625756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.985 [2024-11-20 06:42:42.625915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.985 [2024-11-20 06:42:42.626075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.985 [2024-11-20 06:42:42.626084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.985 [2024-11-20 06:42:42.626090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.985 [2024-11-20 06:42:42.626096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.985 [2024-11-20 06:42:42.638127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.985 [2024-11-20 06:42:42.638494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.985 [2024-11-20 06:42:42.638512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.985 [2024-11-20 06:42:42.638519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.638686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.638854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.638864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.638871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.638878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 9707.00 IOPS, 37.92 MiB/s [2024-11-20T05:42:42.822Z] [2024-11-20 06:42:42.650903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.651233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.651250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.651264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.651433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.651601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.651611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.651618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.651624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 [2024-11-20 06:42:42.663664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.663994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.664011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.664018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.664186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.664360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.664370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.664377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.664383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 [2024-11-20 06:42:42.676485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.676882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.676928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.676952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.677545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.677973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.677983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.677990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.677996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 [2024-11-20 06:42:42.689321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.689744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.689790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.689813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.690408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.690978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.690988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.690994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.691001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 [2024-11-20 06:42:42.702155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.702571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.702589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.702596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.702755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.702915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.702924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.702931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.702937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 [2024-11-20 06:42:42.714968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.715336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.715354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.715362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.715529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.715697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.715707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.715714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.715720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 [2024-11-20 06:42:42.728061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.728404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.728422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.728430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.728602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.728774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.728784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.728795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.728803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 [2024-11-20 06:42:42.741040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.741412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.741430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.741438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.741606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.741775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.986 [2024-11-20 06:42:42.741786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.986 [2024-11-20 06:42:42.741793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.986 [2024-11-20 06:42:42.741800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.986 [2024-11-20 06:42:42.753827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.986 [2024-11-20 06:42:42.754173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.986 [2024-11-20 06:42:42.754190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.986 [2024-11-20 06:42:42.754197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.986 [2024-11-20 06:42:42.754383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.986 [2024-11-20 06:42:42.754552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.987 [2024-11-20 06:42:42.754562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.987 [2024-11-20 06:42:42.754569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.987 [2024-11-20 06:42:42.754575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.987 [2024-11-20 06:42:42.766603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.987 [2024-11-20 06:42:42.767029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.987 [2024-11-20 06:42:42.767074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.987 [2024-11-20 06:42:42.767098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.987 [2024-11-20 06:42:42.767691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.987 [2024-11-20 06:42:42.768160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.987 [2024-11-20 06:42:42.768169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.987 [2024-11-20 06:42:42.768176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.987 [2024-11-20 06:42:42.768183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.987 [2024-11-20 06:42:42.779370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:10.987 [2024-11-20 06:42:42.779796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.987 [2024-11-20 06:42:42.779841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:10.987 [2024-11-20 06:42:42.779865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:10.987 [2024-11-20 06:42:42.780251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:10.987 [2024-11-20 06:42:42.780421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:10.987 [2024-11-20 06:42:42.780430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:10.987 [2024-11-20 06:42:42.780437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:10.987 [2024-11-20 06:42:42.780443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:10.987 [2024-11-20 06:42:42.792105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:10.987 [2024-11-20 06:42:42.792515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.987 [2024-11-20 06:42:42.792554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:10.987 [2024-11-20 06:42:42.792580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:10.987 [2024-11-20 06:42:42.793147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:10.987 [2024-11-20 06:42:42.793333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:10.987 [2024-11-20 06:42:42.793344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:10.987 [2024-11-20 06:42:42.793350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:10.987 [2024-11-20 06:42:42.793357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:10.987 [2024-11-20 06:42:42.804857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:10.987 [2024-11-20 06:42:42.805161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.987 [2024-11-20 06:42:42.805178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:10.987 [2024-11-20 06:42:42.805185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:10.987 [2024-11-20 06:42:42.805369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:10.987 [2024-11-20 06:42:42.805537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:10.987 [2024-11-20 06:42:42.805547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:10.987 [2024-11-20 06:42:42.805554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:10.987 [2024-11-20 06:42:42.805560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.249 [2024-11-20 06:42:42.817837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.249 [2024-11-20 06:42:42.818194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.249 [2024-11-20 06:42:42.818223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.249 [2024-11-20 06:42:42.818234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.249 [2024-11-20 06:42:42.818402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.249 [2024-11-20 06:42:42.818569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.249 [2024-11-20 06:42:42.818579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.249 [2024-11-20 06:42:42.818586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.249 [2024-11-20 06:42:42.818592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.249 [2024-11-20 06:42:42.830608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.249 [2024-11-20 06:42:42.831007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.249 [2024-11-20 06:42:42.831052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.249 [2024-11-20 06:42:42.831075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.249 [2024-11-20 06:42:42.831542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.249 [2024-11-20 06:42:42.831711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.249 [2024-11-20 06:42:42.831721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.249 [2024-11-20 06:42:42.831727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.249 [2024-11-20 06:42:42.831734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.249 [2024-11-20 06:42:42.843392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.249 [2024-11-20 06:42:42.843732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.249 [2024-11-20 06:42:42.843750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.249 [2024-11-20 06:42:42.843757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.249 [2024-11-20 06:42:42.843915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.249 [2024-11-20 06:42:42.844074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.249 [2024-11-20 06:42:42.844083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.249 [2024-11-20 06:42:42.844090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.249 [2024-11-20 06:42:42.844096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.249 [2024-11-20 06:42:42.856166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.249 [2024-11-20 06:42:42.856573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.249 [2024-11-20 06:42:42.856607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.249 [2024-11-20 06:42:42.856633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.249 [2024-11-20 06:42:42.857184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.249 [2024-11-20 06:42:42.857374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.857385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.857391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.857398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.250 [2024-11-20 06:42:42.868907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.869325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.869342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.869350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.869509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.869667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.869677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.869683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.869689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.250 [2024-11-20 06:42:42.881702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.882126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.882171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.882195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.882686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.882856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.882865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.882872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.882878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.250 [2024-11-20 06:42:42.894428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.894816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.894833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.894840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.894998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.895157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.895167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.895173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.895182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.250 [2024-11-20 06:42:42.907156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.907582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.907628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.907651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.908245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.908612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.908621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.908628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.908634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.250 [2024-11-20 06:42:42.920036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.920365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.920382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.920390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.920549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.920709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.920718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.920724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.920731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.250 [2024-11-20 06:42:42.932895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.933323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.933370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.933394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.933973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.934531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.934541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.934548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.934554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.250 [2024-11-20 06:42:42.945704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.946132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.946177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.946215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.946667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.946836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.946846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.946852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.946859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.250 [2024-11-20 06:42:42.958676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.959099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.959116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.959124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.959297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.959466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.959475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.959482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.959488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.250 [2024-11-20 06:42:42.971421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.971810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.971828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.971835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.972002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.250 [2024-11-20 06:42:42.972169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.250 [2024-11-20 06:42:42.972179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.250 [2024-11-20 06:42:42.972185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.250 [2024-11-20 06:42:42.972192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.250 [2024-11-20 06:42:42.984394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.250 [2024-11-20 06:42:42.984803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.250 [2024-11-20 06:42:42.984821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.250 [2024-11-20 06:42:42.984831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.250 [2024-11-20 06:42:42.985004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.251 [2024-11-20 06:42:42.985176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.251 [2024-11-20 06:42:42.985186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.251 [2024-11-20 06:42:42.985193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.251 [2024-11-20 06:42:42.985200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.251 [2024-11-20 06:42:42.997142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.251 [2024-11-20 06:42:42.997560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.251 [2024-11-20 06:42:42.997577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.251 [2024-11-20 06:42:42.997584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.251 [2024-11-20 06:42:42.997743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.251 [2024-11-20 06:42:42.997903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.251 [2024-11-20 06:42:42.997912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.251 [2024-11-20 06:42:42.997919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.251 [2024-11-20 06:42:42.997925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.251 [2024-11-20 06:42:43.009961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.251 [2024-11-20 06:42:43.010301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.251 [2024-11-20 06:42:43.010318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.251 [2024-11-20 06:42:43.010326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.251 [2024-11-20 06:42:43.010484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.251 [2024-11-20 06:42:43.010644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.251 [2024-11-20 06:42:43.010653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.251 [2024-11-20 06:42:43.010659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.251 [2024-11-20 06:42:43.010665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.251 [2024-11-20 06:42:43.022769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.251 [2024-11-20 06:42:43.023163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.251 [2024-11-20 06:42:43.023181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.251 [2024-11-20 06:42:43.023188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.251 [2024-11-20 06:42:43.023373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.251 [2024-11-20 06:42:43.023543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.251 [2024-11-20 06:42:43.023556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.251 [2024-11-20 06:42:43.023562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.251 [2024-11-20 06:42:43.023569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.251 [2024-11-20 06:42:43.035528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.251 [2024-11-20 06:42:43.035865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.251 [2024-11-20 06:42:43.035911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.251 [2024-11-20 06:42:43.035935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.251 [2024-11-20 06:42:43.036487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.251 [2024-11-20 06:42:43.036657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.251 [2024-11-20 06:42:43.036667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.251 [2024-11-20 06:42:43.036674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.251 [2024-11-20 06:42:43.036680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.251 [2024-11-20 06:42:43.048368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.251 [2024-11-20 06:42:43.048674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.251 [2024-11-20 06:42:43.048691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.251 [2024-11-20 06:42:43.048698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.251 [2024-11-20 06:42:43.048857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.251 [2024-11-20 06:42:43.049017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.251 [2024-11-20 06:42:43.049027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.251 [2024-11-20 06:42:43.049033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.251 [2024-11-20 06:42:43.049039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.251 [2024-11-20 06:42:43.061368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.251 [2024-11-20 06:42:43.061771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.251 [2024-11-20 06:42:43.061788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.251 [2024-11-20 06:42:43.061797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.251 [2024-11-20 06:42:43.061965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.251 [2024-11-20 06:42:43.062133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.251 [2024-11-20 06:42:43.062143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.251 [2024-11-20 06:42:43.062150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.251 [2024-11-20 06:42:43.062159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.251 [2024-11-20 06:42:43.074330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.251 [2024-11-20 06:42:43.074688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.251 [2024-11-20 06:42:43.074705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.251 [2024-11-20 06:42:43.074713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.251 [2024-11-20 06:42:43.074885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.251 [2024-11-20 06:42:43.075058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.251 [2024-11-20 06:42:43.075068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.251 [2024-11-20 06:42:43.075075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.251 [2024-11-20 06:42:43.075081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.512 [2024-11-20 06:42:43.087405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.087777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.087795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.512 [2024-11-20 06:42:43.087803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.512 [2024-11-20 06:42:43.087972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.512 [2024-11-20 06:42:43.088140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.512 [2024-11-20 06:42:43.088151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.512 [2024-11-20 06:42:43.088158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.512 [2024-11-20 06:42:43.088164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.512 [2024-11-20 06:42:43.100347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.100641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.100659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.512 [2024-11-20 06:42:43.100666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.512 [2024-11-20 06:42:43.100833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.512 [2024-11-20 06:42:43.101003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.512 [2024-11-20 06:42:43.101013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.512 [2024-11-20 06:42:43.101019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.512 [2024-11-20 06:42:43.101026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.512 [2024-11-20 06:42:43.113339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.113747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.113765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.512 [2024-11-20 06:42:43.113772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.512 [2024-11-20 06:42:43.113945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.512 [2024-11-20 06:42:43.114117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.512 [2024-11-20 06:42:43.114127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.512 [2024-11-20 06:42:43.114133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.512 [2024-11-20 06:42:43.114140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.512 [2024-11-20 06:42:43.126196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.126581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.126626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.512 [2024-11-20 06:42:43.126650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.512 [2024-11-20 06:42:43.127085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.512 [2024-11-20 06:42:43.127259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.512 [2024-11-20 06:42:43.127269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.512 [2024-11-20 06:42:43.127275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.512 [2024-11-20 06:42:43.127282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.512 [2024-11-20 06:42:43.138984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.139262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.139279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.512 [2024-11-20 06:42:43.139288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.512 [2024-11-20 06:42:43.139454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.512 [2024-11-20 06:42:43.139622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.512 [2024-11-20 06:42:43.139632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.512 [2024-11-20 06:42:43.139639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.512 [2024-11-20 06:42:43.139645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.512 [2024-11-20 06:42:43.151993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.152421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.152440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.512 [2024-11-20 06:42:43.152449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.512 [2024-11-20 06:42:43.152626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.512 [2024-11-20 06:42:43.152799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.512 [2024-11-20 06:42:43.152810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.512 [2024-11-20 06:42:43.152816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.512 [2024-11-20 06:42:43.152824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.512 [2024-11-20 06:42:43.164950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.165307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.165325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.512 [2024-11-20 06:42:43.165334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.512 [2024-11-20 06:42:43.165507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.512 [2024-11-20 06:42:43.165678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.512 [2024-11-20 06:42:43.165688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.512 [2024-11-20 06:42:43.165695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.512 [2024-11-20 06:42:43.165702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.512 [2024-11-20 06:42:43.178096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.178524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.178543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.512 [2024-11-20 06:42:43.178551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.512 [2024-11-20 06:42:43.178724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.512 [2024-11-20 06:42:43.178897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.512 [2024-11-20 06:42:43.178907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.512 [2024-11-20 06:42:43.178914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.512 [2024-11-20 06:42:43.178921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.512 [2024-11-20 06:42:43.191345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.512 [2024-11-20 06:42:43.191768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.512 [2024-11-20 06:42:43.191787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.191794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.191977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.192159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.192173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.192181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.192188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.513 [2024-11-20 06:42:43.204418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.204847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.204865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.204873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.205056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.205247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.205258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.205266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.205274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.513 [2024-11-20 06:42:43.217569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.217970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.217988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.217996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.218168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.218348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.218360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.218367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.218374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.513 [2024-11-20 06:42:43.230574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.231016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.231034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.231043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.231232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.231417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.231427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.231435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.231446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.513 [2024-11-20 06:42:43.243932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.244303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.244338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.244346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.244533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.244706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.244717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.244724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.244731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.513 [2024-11-20 06:42:43.256949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.257329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.257348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.257355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.257532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.257691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.257701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.257707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.257713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.513 [2024-11-20 06:42:43.270013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.270358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.270377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.270385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.270567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.270736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.270746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.270753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.270760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.513 [2024-11-20 06:42:43.282888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.283291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.283315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.283323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.283482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.283641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.283651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.283657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.283663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.513 [2024-11-20 06:42:43.295686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.296095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.296113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.513 [2024-11-20 06:42:43.296121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.513 [2024-11-20 06:42:43.296303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.513 [2024-11-20 06:42:43.296471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.513 [2024-11-20 06:42:43.296481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.513 [2024-11-20 06:42:43.296488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.513 [2024-11-20 06:42:43.296494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.513 [2024-11-20 06:42:43.308544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.513 [2024-11-20 06:42:43.308995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.513 [2024-11-20 06:42:43.309013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.514 [2024-11-20 06:42:43.309021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.514 [2024-11-20 06:42:43.309188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.514 [2024-11-20 06:42:43.309363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.514 [2024-11-20 06:42:43.309374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.514 [2024-11-20 06:42:43.309381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.514 [2024-11-20 06:42:43.309387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.514 [2024-11-20 06:42:43.321484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.514 [2024-11-20 06:42:43.321807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.514 [2024-11-20 06:42:43.321824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.514 [2024-11-20 06:42:43.321831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.514 [2024-11-20 06:42:43.321992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.514 [2024-11-20 06:42:43.322152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.514 [2024-11-20 06:42:43.322162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.514 [2024-11-20 06:42:43.322168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.514 [2024-11-20 06:42:43.322174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.514 [2024-11-20 06:42:43.334444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.514 [2024-11-20 06:42:43.334727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.514 [2024-11-20 06:42:43.334744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.514 [2024-11-20 06:42:43.334751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.514 [2024-11-20 06:42:43.334909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.514 [2024-11-20 06:42:43.335069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.514 [2024-11-20 06:42:43.335079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.514 [2024-11-20 06:42:43.335085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.514 [2024-11-20 06:42:43.335092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.774 [2024-11-20 06:42:43.347344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.774 [2024-11-20 06:42:43.347778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.774 [2024-11-20 06:42:43.347797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.774 [2024-11-20 06:42:43.347804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.774 [2024-11-20 06:42:43.347971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.774 [2024-11-20 06:42:43.348139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.774 [2024-11-20 06:42:43.348150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.774 [2024-11-20 06:42:43.348156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.774 [2024-11-20 06:42:43.348163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.774 [2024-11-20 06:42:43.360185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.774 [2024-11-20 06:42:43.360598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.774 [2024-11-20 06:42:43.360644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.774 [2024-11-20 06:42:43.360667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.774 [2024-11-20 06:42:43.361258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.774 [2024-11-20 06:42:43.361709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.774 [2024-11-20 06:42:43.361722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.774 [2024-11-20 06:42:43.361729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.774 [2024-11-20 06:42:43.361736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.774 [2024-11-20 06:42:43.373022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.774 [2024-11-20 06:42:43.373342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.774 [2024-11-20 06:42:43.373360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.774 [2024-11-20 06:42:43.373368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.373527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.373686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.373696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.373702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.373709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.775 [2024-11-20 06:42:43.385844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.386182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.386199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.386212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.386371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.386531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.386540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.386546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.386553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.775 [2024-11-20 06:42:43.398702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.399043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.399060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.399067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.399231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.399391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.399401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.399407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.399413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.775 [2024-11-20 06:42:43.411474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.411861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.411878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.411885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.412044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.412209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.412220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.412243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.412252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.775 [2024-11-20 06:42:43.424373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.424640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.424657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.424665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.424823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.424983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.424992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.424998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.425004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.775 [2024-11-20 06:42:43.437302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.437595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.437614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.437622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.437790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.437957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.437968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.437976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.437983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.775 [2024-11-20 06:42:43.450295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.450651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.450672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.450680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.450839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.450998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.451008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.451014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.451020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.775 [2024-11-20 06:42:43.463213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.463497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.463514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.463522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.463681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.463840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.463850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.463856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.463862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.775 [2024-11-20 06:42:43.475968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.476312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.476330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.476338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.476505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.476673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.476683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.476690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.476696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.775 [2024-11-20 06:42:43.488859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.775 [2024-11-20 06:42:43.489287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.775 [2024-11-20 06:42:43.489305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.775 [2024-11-20 06:42:43.489313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.775 [2024-11-20 06:42:43.489498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.775 [2024-11-20 06:42:43.489664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.775 [2024-11-20 06:42:43.489674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.775 [2024-11-20 06:42:43.489680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.775 [2024-11-20 06:42:43.489687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.776 [2024-11-20 06:42:43.501883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.502339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.776 [2024-11-20 06:42:43.502357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.776 [2024-11-20 06:42:43.502366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.776 [2024-11-20 06:42:43.502540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.776 [2024-11-20 06:42:43.502712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.776 [2024-11-20 06:42:43.502722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.776 [2024-11-20 06:42:43.502729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.776 [2024-11-20 06:42:43.502736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.776 [2024-11-20 06:42:43.514784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.515198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.776 [2024-11-20 06:42:43.515219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.776 [2024-11-20 06:42:43.515227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.776 [2024-11-20 06:42:43.515386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.776 [2024-11-20 06:42:43.515545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.776 [2024-11-20 06:42:43.515554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.776 [2024-11-20 06:42:43.515560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.776 [2024-11-20 06:42:43.515567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.776 [2024-11-20 06:42:43.527766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.528185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.776 [2024-11-20 06:42:43.528207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.776 [2024-11-20 06:42:43.528215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.776 [2024-11-20 06:42:43.528383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.776 [2024-11-20 06:42:43.528550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.776 [2024-11-20 06:42:43.528560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.776 [2024-11-20 06:42:43.528570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.776 [2024-11-20 06:42:43.528577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.776 [2024-11-20 06:42:43.540564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.540959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.776 [2024-11-20 06:42:43.541004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.776 [2024-11-20 06:42:43.541029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.776 [2024-11-20 06:42:43.541622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.776 [2024-11-20 06:42:43.541945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.776 [2024-11-20 06:42:43.541954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.776 [2024-11-20 06:42:43.541961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.776 [2024-11-20 06:42:43.541967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.776 [2024-11-20 06:42:43.553382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.553798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.776 [2024-11-20 06:42:43.553815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.776 [2024-11-20 06:42:43.553823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.776 [2024-11-20 06:42:43.553980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.776 [2024-11-20 06:42:43.554140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.776 [2024-11-20 06:42:43.554150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.776 [2024-11-20 06:42:43.554156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.776 [2024-11-20 06:42:43.554162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.776 [2024-11-20 06:42:43.566162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.566575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.776 [2024-11-20 06:42:43.566592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.776 [2024-11-20 06:42:43.566599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.776 [2024-11-20 06:42:43.566758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.776 [2024-11-20 06:42:43.566917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.776 [2024-11-20 06:42:43.566926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.776 [2024-11-20 06:42:43.566932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.776 [2024-11-20 06:42:43.566939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.776 [2024-11-20 06:42:43.578956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.579370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.776 [2024-11-20 06:42:43.579387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.776 [2024-11-20 06:42:43.579395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.776 [2024-11-20 06:42:43.579554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.776 [2024-11-20 06:42:43.579713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.776 [2024-11-20 06:42:43.579723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.776 [2024-11-20 06:42:43.579729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.776 [2024-11-20 06:42:43.579735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:11.776 [2024-11-20 06:42:43.591697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.592117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.776 [2024-11-20 06:42:43.592161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:11.776 [2024-11-20 06:42:43.592185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:11.776 [2024-11-20 06:42:43.592779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:11.776 [2024-11-20 06:42:43.593237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:11.776 [2024-11-20 06:42:43.593247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:11.776 [2024-11-20 06:42:43.593254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:11.776 [2024-11-20 06:42:43.593260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:11.776 [2024-11-20 06:42:43.604772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:11.776 [2024-11-20 06:42:43.605140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.605158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.037 [2024-11-20 06:42:43.605167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.037 [2024-11-20 06:42:43.605345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.037 [2024-11-20 06:42:43.605519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.037 [2024-11-20 06:42:43.605529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.037 [2024-11-20 06:42:43.605536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.037 [2024-11-20 06:42:43.605543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.037 [2024-11-20 06:42:43.617607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.037 [2024-11-20 06:42:43.618029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.618082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.037 [2024-11-20 06:42:43.618106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.037 [2024-11-20 06:42:43.618699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.037 [2024-11-20 06:42:43.619224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.037 [2024-11-20 06:42:43.619234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.037 [2024-11-20 06:42:43.619241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.037 [2024-11-20 06:42:43.619248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.037 [2024-11-20 06:42:43.630437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.037 [2024-11-20 06:42:43.630842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.630859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.037 [2024-11-20 06:42:43.630866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.037 [2024-11-20 06:42:43.631025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.037 [2024-11-20 06:42:43.631183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.037 [2024-11-20 06:42:43.631192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.037 [2024-11-20 06:42:43.631199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.037 [2024-11-20 06:42:43.631212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.037 [2024-11-20 06:42:43.643216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.037 [2024-11-20 06:42:43.643629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.643646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.037 [2024-11-20 06:42:43.643653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.037 [2024-11-20 06:42:43.643810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.037 [2024-11-20 06:42:43.643969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.037 [2024-11-20 06:42:43.643979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.037 [2024-11-20 06:42:43.643986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.037 [2024-11-20 06:42:43.643992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.037 7280.25 IOPS, 28.44 MiB/s [2024-11-20T05:42:43.873Z] [2024-11-20 06:42:43.656067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.037 [2024-11-20 06:42:43.656427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.656448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.037 [2024-11-20 06:42:43.656456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.037 [2024-11-20 06:42:43.656630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.037 [2024-11-20 06:42:43.656789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.037 [2024-11-20 06:42:43.656798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.037 [2024-11-20 06:42:43.656804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.037 [2024-11-20 06:42:43.656811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
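Annotation: the bdevperf throughput sample interleaved above (7280.25 IOPS, 28.44 MiB/s) is consistent with a 4 KiB I/O size: 7280.25 * 4096 B = 29,819,904 B/s ≈ 28.44 MiB/s. The I/O size is an inference from those two numbers, not something the log states; that the counter is still moving while path 2 of nqn.2016-06.io.spdk:cnode1 spins in its reset loop suggests I/O keeps completing on another path.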
00:32:12.037 [2024-11-20 06:42:43.668994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.037 [2024-11-20 06:42:43.669384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.669401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.037 [2024-11-20 06:42:43.669409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.037 [2024-11-20 06:42:43.669568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.037 [2024-11-20 06:42:43.669728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.037 [2024-11-20 06:42:43.669738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.037 [2024-11-20 06:42:43.669744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.037 [2024-11-20 06:42:43.669750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.037 [2024-11-20 06:42:43.681862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.037 [2024-11-20 06:42:43.682286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.682332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.037 [2024-11-20 06:42:43.682355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.037 [2024-11-20 06:42:43.682933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.037 [2024-11-20 06:42:43.683458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.037 [2024-11-20 06:42:43.683468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.037 [2024-11-20 06:42:43.683475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.037 [2024-11-20 06:42:43.683481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.037 [2024-11-20 06:42:43.694742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.037 [2024-11-20 06:42:43.695161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.695218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.037 [2024-11-20 06:42:43.695244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.037 [2024-11-20 06:42:43.695611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.037 [2024-11-20 06:42:43.695771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.037 [2024-11-20 06:42:43.695780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.037 [2024-11-20 06:42:43.695791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.037 [2024-11-20 06:42:43.695797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.037 [2024-11-20 06:42:43.707562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.037 [2024-11-20 06:42:43.707970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-11-20 06:42:43.707987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.707994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.708153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.708338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.708348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.708354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.708361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.038 [2024-11-20 06:42:43.720524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.720888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.720905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.720913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.721080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.721257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.721268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.721275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.721281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.038 [2024-11-20 06:42:43.733315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.733724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.733764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.733790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.734325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.734486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.734494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.734500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.734506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.038 [2024-11-20 06:42:43.746091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.746509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.746526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.746534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.746701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.746870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.746880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.746887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.746894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.038 [2024-11-20 06:42:43.759078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.759504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.759522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.759531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.759703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.759875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.759885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.759891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.759898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.038 [2024-11-20 06:42:43.772052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.772482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.772500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.772508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.772676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.772844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.772854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.772860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.772867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.038 [2024-11-20 06:42:43.784862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.785207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.785224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.785235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.785395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.785554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.785564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.785570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.785576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.038 [2024-11-20 06:42:43.797676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.798081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.798120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.798145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.798701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.798869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.798877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.798884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.798890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.038 [2024-11-20 06:42:43.810402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.810798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.810815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.038 [2024-11-20 06:42:43.810823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.038 [2024-11-20 06:42:43.810982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.038 [2024-11-20 06:42:43.811140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.038 [2024-11-20 06:42:43.811149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.038 [2024-11-20 06:42:43.811155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.038 [2024-11-20 06:42:43.811162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.038 [2024-11-20 06:42:43.823111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.038 [2024-11-20 06:42:43.823514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-11-20 06:42:43.823560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.039 [2024-11-20 06:42:43.823585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.039 [2024-11-20 06:42:43.824164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.039 [2024-11-20 06:42:43.824652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.039 [2024-11-20 06:42:43.824663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.039 [2024-11-20 06:42:43.824670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.039 [2024-11-20 06:42:43.824676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.039 [2024-11-20 06:42:43.835960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.039 [2024-11-20 06:42:43.836369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-11-20 06:42:43.836386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.039 [2024-11-20 06:42:43.836393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.039 [2024-11-20 06:42:43.836552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.039 [2024-11-20 06:42:43.836711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.039 [2024-11-20 06:42:43.836721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.039 [2024-11-20 06:42:43.836727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.039 [2024-11-20 06:42:43.836733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.039 [2024-11-20 06:42:43.848700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.039 [2024-11-20 06:42:43.849109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-11-20 06:42:43.849147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.039 [2024-11-20 06:42:43.849172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.039 [2024-11-20 06:42:43.849698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.039 [2024-11-20 06:42:43.849857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.039 [2024-11-20 06:42:43.849865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.039 [2024-11-20 06:42:43.849871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.039 [2024-11-20 06:42:43.849877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.039 [2024-11-20 06:42:43.861505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.039 [2024-11-20 06:42:43.861906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-11-20 06:42:43.861951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.039 [2024-11-20 06:42:43.861974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.039 [2024-11-20 06:42:43.862569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.039 [2024-11-20 06:42:43.863034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.039 [2024-11-20 06:42:43.863044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.039 [2024-11-20 06:42:43.863054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.039 [2024-11-20 06:42:43.863062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.300 [2024-11-20 06:42:43.874526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.300 [2024-11-20 06:42:43.874962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.300 [2024-11-20 06:42:43.875008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.300 [2024-11-20 06:42:43.875032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.300 [2024-11-20 06:42:43.875635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.300 [2024-11-20 06:42:43.875806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.300 [2024-11-20 06:42:43.875816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.300 [2024-11-20 06:42:43.875822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.300 [2024-11-20 06:42:43.875829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.300 [2024-11-20 06:42:43.887326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.300 [2024-11-20 06:42:43.887742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.300 [2024-11-20 06:42:43.887797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.300 [2024-11-20 06:42:43.887820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.300 [2024-11-20 06:42:43.888413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.300 [2024-11-20 06:42:43.888907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.300 [2024-11-20 06:42:43.888916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.300 [2024-11-20 06:42:43.888923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.300 [2024-11-20 06:42:43.888929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.300 [2024-11-20 06:42:43.900049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.300 [2024-11-20 06:42:43.900478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.300 [2024-11-20 06:42:43.900524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.300 [2024-11-20 06:42:43.900548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.300 [2024-11-20 06:42:43.900976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.300 [2024-11-20 06:42:43.901137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.300 [2024-11-20 06:42:43.901145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.300 [2024-11-20 06:42:43.901151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.300 [2024-11-20 06:42:43.901157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.300 [2024-11-20 06:42:43.912799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.300 [2024-11-20 06:42:43.913222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.300 [2024-11-20 06:42:43.913269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.300 [2024-11-20 06:42:43.913293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.300 [2024-11-20 06:42:43.913797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.300 [2024-11-20 06:42:43.913957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.300 [2024-11-20 06:42:43.913965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.300 [2024-11-20 06:42:43.913971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.300 [2024-11-20 06:42:43.913977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.300 [2024-11-20 06:42:43.925520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.300 [2024-11-20 06:42:43.925943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.300 [2024-11-20 06:42:43.925988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.300 [2024-11-20 06:42:43.926012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.300 [2024-11-20 06:42:43.926606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.300 [2024-11-20 06:42:43.927060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.300 [2024-11-20 06:42:43.927069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.300 [2024-11-20 06:42:43.927076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.300 [2024-11-20 06:42:43.927084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.300 [2024-11-20 06:42:43.938343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.300 [2024-11-20 06:42:43.938684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.300 [2024-11-20 06:42:43.938701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.300 [2024-11-20 06:42:43.938708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.300 [2024-11-20 06:42:43.938867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.300 [2024-11-20 06:42:43.939025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.300 [2024-11-20 06:42:43.939034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.300 [2024-11-20 06:42:43.939040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.300 [2024-11-20 06:42:43.939046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.300 [2024-11-20 06:42:43.951168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.300 [2024-11-20 06:42:43.951592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.300 [2024-11-20 06:42:43.951638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.300 [2024-11-20 06:42:43.951672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.300 [2024-11-20 06:42:43.952175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:43.952339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:43.952347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:43.952353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:43.952359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.301 [2024-11-20 06:42:43.963968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:43.964374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:43.964391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:43.964399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:43.964557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:43.964715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:43.964725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:43.964731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:43.964737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.301 [2024-11-20 06:42:43.976710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:43.977118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:43.977173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:43.977197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:43.977795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:43.978254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:43.978265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:43.978272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:43.978278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.301 [2024-11-20 06:42:43.989511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:43.989921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:43.989938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:43.989945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:43.990104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:43.990289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:43.990301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:43.990308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:43.990315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.301 [2024-11-20 06:42:44.002342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:44.002761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:44.002806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:44.002830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:44.003338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:44.003514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:44.003523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:44.003530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:44.003537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.301 [2024-11-20 06:42:44.015402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:44.015825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:44.015842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:44.015851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:44.016023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:44.016195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:44.016210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:44.016217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:44.016224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.301 [2024-11-20 06:42:44.028175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:44.028594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:44.028611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:44.028618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:44.028775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:44.028934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:44.028944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:44.028954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:44.028960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.301 [2024-11-20 06:42:44.041014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:44.041428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:44.041445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:44.041454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:44.041612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:44.041771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:44.041781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:44.041787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:44.041793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.301 [2024-11-20 06:42:44.053746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:44.054153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:44.054171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:44.054178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:44.054344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:44.054504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:44.054513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:44.054520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:44.054526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.301 [2024-11-20 06:42:44.066561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:44.066987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:44.067004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:44.067011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.301 [2024-11-20 06:42:44.067170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.301 [2024-11-20 06:42:44.067357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.301 [2024-11-20 06:42:44.067367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.301 [2024-11-20 06:42:44.067374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.301 [2024-11-20 06:42:44.067381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.301 [2024-11-20 06:42:44.079400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.301 [2024-11-20 06:42:44.079793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.301 [2024-11-20 06:42:44.079809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.301 [2024-11-20 06:42:44.079817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.302 [2024-11-20 06:42:44.079976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.302 [2024-11-20 06:42:44.080134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.302 [2024-11-20 06:42:44.080144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.302 [2024-11-20 06:42:44.080150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.302 [2024-11-20 06:42:44.080156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.302 [2024-11-20 06:42:44.092189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.302 [2024-11-20 06:42:44.092590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.302 [2024-11-20 06:42:44.092607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.302 [2024-11-20 06:42:44.092614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.302 [2024-11-20 06:42:44.092773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.302 [2024-11-20 06:42:44.092933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.302 [2024-11-20 06:42:44.092942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.302 [2024-11-20 06:42:44.092949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.302 [2024-11-20 06:42:44.092955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.302 [2024-11-20 06:42:44.105000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.302 [2024-11-20 06:42:44.105426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.302 [2024-11-20 06:42:44.105473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.302 [2024-11-20 06:42:44.105497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.302 [2024-11-20 06:42:44.106075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.302 [2024-11-20 06:42:44.106489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.302 [2024-11-20 06:42:44.106500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.302 [2024-11-20 06:42:44.106507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.302 [2024-11-20 06:42:44.106513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.302 [2024-11-20 06:42:44.117798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.302 [2024-11-20 06:42:44.118213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.302 [2024-11-20 06:42:44.118230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.302 [2024-11-20 06:42:44.118240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.302 [2024-11-20 06:42:44.118400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.302 [2024-11-20 06:42:44.118559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.302 [2024-11-20 06:42:44.118568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.302 [2024-11-20 06:42:44.118574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.302 [2024-11-20 06:42:44.118581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.561 [2024-11-20 06:42:44.130766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.561 [2024-11-20 06:42:44.131217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.561 [2024-11-20 06:42:44.131262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.561 [2024-11-20 06:42:44.131287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.561 [2024-11-20 06:42:44.131864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.561 [2024-11-20 06:42:44.132451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.561 [2024-11-20 06:42:44.132471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.561 [2024-11-20 06:42:44.132486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.561 [2024-11-20 06:42:44.132501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.561 [2024-11-20 06:42:44.146069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.561 [2024-11-20 06:42:44.146586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.561 [2024-11-20 06:42:44.146610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.561 [2024-11-20 06:42:44.146620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.146874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.147128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.147142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.147151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.147160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.562 [2024-11-20 06:42:44.159167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.159504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.159522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.159530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.159703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.159880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.159891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.159897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.159904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.562 [2024-11-20 06:42:44.171969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.172394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.172412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.172420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.172588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.172755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.172765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.172772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.172779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.562 [2024-11-20 06:42:44.184895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.185341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.185387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.185411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.185815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.185975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.185984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.185991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.185997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.562 [2024-11-20 06:42:44.197787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.198197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.198219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.198227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.198385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.198543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.198553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.198563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.198570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.562 [2024-11-20 06:42:44.210548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.210959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.210976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.210983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.211141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.211328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.211338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.211345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.211351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.562 [2024-11-20 06:42:44.223491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.223911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.223927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.223934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.224093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.224277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.224287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.224293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.224300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.562 [2024-11-20 06:42:44.236355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.236780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.236796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.236804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.236963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.237123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.237133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.237140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.237147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.562 [2024-11-20 06:42:44.249171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.249621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.249639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.249647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.249816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.249984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.249993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.250001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.250008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.562 [2024-11-20 06:42:44.262024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.262377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.262394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.562 [2024-11-20 06:42:44.262403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.562 [2024-11-20 06:42:44.262575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.562 [2024-11-20 06:42:44.262748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.562 [2024-11-20 06:42:44.262758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.562 [2024-11-20 06:42:44.262765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.562 [2024-11-20 06:42:44.262772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.562 [2024-11-20 06:42:44.275089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.562 [2024-11-20 06:42:44.275458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.562 [2024-11-20 06:42:44.275475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.275484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.275656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.275828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.275838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.275846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.275852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.563 [2024-11-20 06:42:44.288016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.563 [2024-11-20 06:42:44.288367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.563 [2024-11-20 06:42:44.288385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.288396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.288567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.288726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.288736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.288742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.288748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.563 [2024-11-20 06:42:44.300849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.563 [2024-11-20 06:42:44.301235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.563 [2024-11-20 06:42:44.301252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.301259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.301418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.301602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.301612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.301618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.301625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.563 [2024-11-20 06:42:44.313568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.563 [2024-11-20 06:42:44.313896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.563 [2024-11-20 06:42:44.313943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.313968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.314564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.314749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.314759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.314766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.314773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.563 [2024-11-20 06:42:44.328515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.563 [2024-11-20 06:42:44.328942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.563 [2024-11-20 06:42:44.328965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.328976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.329238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.329495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.329512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.329522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.329532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.563 [2024-11-20 06:42:44.341483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.563 [2024-11-20 06:42:44.341918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.563 [2024-11-20 06:42:44.341976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.341999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.342595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.343078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.343088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.343095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.343101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.563 [2024-11-20 06:42:44.354224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.563 [2024-11-20 06:42:44.354614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.563 [2024-11-20 06:42:44.354653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.354678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.355273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.355738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.355758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.355772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.355787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.563 [2024-11-20 06:42:44.368993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.563 [2024-11-20 06:42:44.369510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.563 [2024-11-20 06:42:44.369533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.369545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.369798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.370054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.370066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.370076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.370091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.563 [2024-11-20 06:42:44.382058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.563 [2024-11-20 06:42:44.382493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.563 [2024-11-20 06:42:44.382511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.563 [2024-11-20 06:42:44.382519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.563 [2024-11-20 06:42:44.382691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.563 [2024-11-20 06:42:44.382864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.563 [2024-11-20 06:42:44.382873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.563 [2024-11-20 06:42:44.382881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.563 [2024-11-20 06:42:44.382887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.823 [2024-11-20 06:42:44.394999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.823 [2024-11-20 06:42:44.395407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.823 [2024-11-20 06:42:44.395425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.823 [2024-11-20 06:42:44.395432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.823 [2024-11-20 06:42:44.395591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.823 [2024-11-20 06:42:44.395751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.823 [2024-11-20 06:42:44.395760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.823 [2024-11-20 06:42:44.395766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.823 [2024-11-20 06:42:44.395772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:12.823 [2024-11-20 06:42:44.407712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:12.823 [2024-11-20 06:42:44.408035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.823 [2024-11-20 06:42:44.408056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:12.823 [2024-11-20 06:42:44.408064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:12.823 [2024-11-20 06:42:44.408227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:12.823 [2024-11-20 06:42:44.408411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:12.823 [2024-11-20 06:42:44.408420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:12.823 [2024-11-20 06:42:44.408427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:12.823 [2024-11-20 06:42:44.408433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:12.823 [2024-11-20 06:42:44.420474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.823 [2024-11-20 06:42:44.420866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.823 [2024-11-20 06:42:44.420882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.823 [2024-11-20 06:42:44.420890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.823 [2024-11-20 06:42:44.421049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.823 [2024-11-20 06:42:44.421213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.823 [2024-11-20 06:42:44.421224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.823 [2024-11-20 06:42:44.421231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.823 [2024-11-20 06:42:44.421237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.823 [2024-11-20 06:42:44.433365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.823 [2024-11-20 06:42:44.433778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.823 [2024-11-20 06:42:44.433824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.823 [2024-11-20 06:42:44.433849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.823 [2024-11-20 06:42:44.434295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.823 [2024-11-20 06:42:44.434456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.823 [2024-11-20 06:42:44.434465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.823 [2024-11-20 06:42:44.434472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.823 [2024-11-20 06:42:44.434478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.823 [2024-11-20 06:42:44.446172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.823 [2024-11-20 06:42:44.446589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.823 [2024-11-20 06:42:44.446635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.823 [2024-11-20 06:42:44.446661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.823 [2024-11-20 06:42:44.447233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.823 [2024-11-20 06:42:44.447418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.823 [2024-11-20 06:42:44.447428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.823 [2024-11-20 06:42:44.447434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.823 [2024-11-20 06:42:44.447442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.823 [2024-11-20 06:42:44.458953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.823 [2024-11-20 06:42:44.459366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.823 [2024-11-20 06:42:44.459412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.823 [2024-11-20 06:42:44.459437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.823 [2024-11-20 06:42:44.459981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.823 [2024-11-20 06:42:44.460142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.823 [2024-11-20 06:42:44.460151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.823 [2024-11-20 06:42:44.460157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.823 [2024-11-20 06:42:44.460163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.823 [2024-11-20 06:42:44.471700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.823 [2024-11-20 06:42:44.472100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.823 [2024-11-20 06:42:44.472117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.823 [2024-11-20 06:42:44.472124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.823 [2024-11-20 06:42:44.472306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.823 [2024-11-20 06:42:44.472475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.823 [2024-11-20 06:42:44.472484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.823 [2024-11-20 06:42:44.472491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.823 [2024-11-20 06:42:44.472497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.823 [2024-11-20 06:42:44.484485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.823 [2024-11-20 06:42:44.484900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.823 [2024-11-20 06:42:44.484940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.823 [2024-11-20 06:42:44.484965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.823 [2024-11-20 06:42:44.485504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.823 [2024-11-20 06:42:44.485673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.823 [2024-11-20 06:42:44.485683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.823 [2024-11-20 06:42:44.485690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.823 [2024-11-20 06:42:44.485696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.823 [2024-11-20 06:42:44.497274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.497627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.497644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.497653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.497821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.497988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.498000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.498007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.498014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.510191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.510482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.510499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.510506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.510665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.510825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.510834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.510841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.510846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.523011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.523295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.523313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.523320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.523508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.523682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.523693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.523700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.523707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.536097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.536484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.536501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.536509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.536681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.536855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.536865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.536871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.536884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.548988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.549436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.549481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.549506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.550084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.550566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.550577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.550584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.550591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.561959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.562366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.562384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.562392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.562564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.562737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.562747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.562754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.562760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.574873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.575181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.575198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.575211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.575378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.575552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.575561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.575567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.575574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.587973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.588346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.588364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.588371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.588544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.588717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.588726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.588733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.588740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.601041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.601477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.601496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.601504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.601676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.601848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.601859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.601865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.601872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.824 [2024-11-20 06:42:44.614073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.824 [2024-11-20 06:42:44.614521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.824 [2024-11-20 06:42:44.614539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.824 [2024-11-20 06:42:44.614546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.824 [2024-11-20 06:42:44.614719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.824 [2024-11-20 06:42:44.614892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.824 [2024-11-20 06:42:44.614901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.824 [2024-11-20 06:42:44.614908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.824 [2024-11-20 06:42:44.614915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.825 [2024-11-20 06:42:44.627150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.825 [2024-11-20 06:42:44.627515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.825 [2024-11-20 06:42:44.627533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.825 [2024-11-20 06:42:44.627540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.825 [2024-11-20 06:42:44.627716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.825 [2024-11-20 06:42:44.627888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.825 [2024-11-20 06:42:44.627898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.825 [2024-11-20 06:42:44.627905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.825 [2024-11-20 06:42:44.627912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:12.825 [2024-11-20 06:42:44.640306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:12.825 [2024-11-20 06:42:44.640721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.825 [2024-11-20 06:42:44.640740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:12.825 [2024-11-20 06:42:44.640748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:12.825 [2024-11-20 06:42:44.640931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:12.825 [2024-11-20 06:42:44.641113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:12.825 [2024-11-20 06:42:44.641124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:12.825 [2024-11-20 06:42:44.641131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:12.825 [2024-11-20 06:42:44.641138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.084 5824.20 IOPS, 22.75 MiB/s [2024-11-20T05:42:44.920Z] [2024-11-20 06:42:44.654837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.084 [2024-11-20 06:42:44.655187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.084 [2024-11-20 06:42:44.655210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.084 [2024-11-20 06:42:44.655219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.084 [2024-11-20 06:42:44.655403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.084 [2024-11-20 06:42:44.655589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.084 [2024-11-20 06:42:44.655599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.084 [2024-11-20 06:42:44.655606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.084 [2024-11-20 06:42:44.655613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.084 [2024-11-20 06:42:44.668067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.084 [2024-11-20 06:42:44.668510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.084 [2024-11-20 06:42:44.668530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.084 [2024-11-20 06:42:44.668538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.084 [2024-11-20 06:42:44.668721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.084 [2024-11-20 06:42:44.668905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.084 [2024-11-20 06:42:44.668919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.084 [2024-11-20 06:42:44.668926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.084 [2024-11-20 06:42:44.668934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.084 [2024-11-20 06:42:44.681323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.084 [2024-11-20 06:42:44.681767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.084 [2024-11-20 06:42:44.681785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.084 [2024-11-20 06:42:44.681794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.084 [2024-11-20 06:42:44.681977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.084 [2024-11-20 06:42:44.682162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.084 [2024-11-20 06:42:44.682173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.084 [2024-11-20 06:42:44.682180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.084 [2024-11-20 06:42:44.682187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.084 [2024-11-20 06:42:44.694458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.084 [2024-11-20 06:42:44.694895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.084 [2024-11-20 06:42:44.694913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.084 [2024-11-20 06:42:44.694922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.084 [2024-11-20 06:42:44.695105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.084 [2024-11-20 06:42:44.695296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.084 [2024-11-20 06:42:44.695306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.084 [2024-11-20 06:42:44.695314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.084 [2024-11-20 06:42:44.695321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.084 [2024-11-20 06:42:44.707637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.084 [2024-11-20 06:42:44.707938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.084 [2024-11-20 06:42:44.707957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.084 [2024-11-20 06:42:44.707965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.084 [2024-11-20 06:42:44.708148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.084 [2024-11-20 06:42:44.708339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.084 [2024-11-20 06:42:44.708350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.084 [2024-11-20 06:42:44.708357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.084 [2024-11-20 06:42:44.708369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.084 [2024-11-20 06:42:44.720828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.084 [2024-11-20 06:42:44.721234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.084 [2024-11-20 06:42:44.721252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.084 [2024-11-20 06:42:44.721259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.084 [2024-11-20 06:42:44.721431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.084 [2024-11-20 06:42:44.721603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.084 [2024-11-20 06:42:44.721613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.084 [2024-11-20 06:42:44.721619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.084 [2024-11-20 06:42:44.721626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.084 [2024-11-20 06:42:44.733925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.084 [2024-11-20 06:42:44.734331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.084 [2024-11-20 06:42:44.734350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.084 [2024-11-20 06:42:44.734358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.084 [2024-11-20 06:42:44.734530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.085 [2024-11-20 06:42:44.734704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.085 [2024-11-20 06:42:44.734714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.085 [2024-11-20 06:42:44.734721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.085 [2024-11-20 06:42:44.734727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.085 [2024-11-20 06:42:44.746880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.085 [2024-11-20 06:42:44.747305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.085 [2024-11-20 06:42:44.747324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.085 [2024-11-20 06:42:44.747332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.085 [2024-11-20 06:42:44.747504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.085 [2024-11-20 06:42:44.747679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.085 [2024-11-20 06:42:44.747690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.085 [2024-11-20 06:42:44.747697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.085 [2024-11-20 06:42:44.747703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.085 [2024-11-20 06:42:44.759993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.085 [2024-11-20 06:42:44.760431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.085 [2024-11-20 06:42:44.760454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.085 [2024-11-20 06:42:44.760462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.085 [2024-11-20 06:42:44.760646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.085 [2024-11-20 06:42:44.760831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.085 [2024-11-20 06:42:44.760842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.085 [2024-11-20 06:42:44.760849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.085 [2024-11-20 06:42:44.760856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.085 [2024-11-20 06:42:44.773158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.085 [2024-11-20 06:42:44.773516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.085 [2024-11-20 06:42:44.773536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.085 [2024-11-20 06:42:44.773544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.085 [2024-11-20 06:42:44.773726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.085 [2024-11-20 06:42:44.773910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.085 [2024-11-20 06:42:44.773920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.085 [2024-11-20 06:42:44.773927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.085 [2024-11-20 06:42:44.773934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.085 [2024-11-20 06:42:44.786294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.085 [2024-11-20 06:42:44.786711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.085 [2024-11-20 06:42:44.786729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.085 [2024-11-20 06:42:44.786737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.085 [2024-11-20 06:42:44.786933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.085 [2024-11-20 06:42:44.787118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.085 [2024-11-20 06:42:44.787128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.085 [2024-11-20 06:42:44.787135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.085 [2024-11-20 06:42:44.787142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.085 [2024-11-20 06:42:44.799240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.085 [2024-11-20 06:42:44.799648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.085 [2024-11-20 06:42:44.799665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.085 [2024-11-20 06:42:44.799673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.085 [2024-11-20 06:42:44.799849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.085 [2024-11-20 06:42:44.800022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.085 [2024-11-20 06:42:44.800032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.085 [2024-11-20 06:42:44.800039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.085 [2024-11-20 06:42:44.800045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.085 [2024-11-20 06:42:44.812179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.085 [2024-11-20 06:42:44.812586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.085 [2024-11-20 06:42:44.812605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.085 [2024-11-20 06:42:44.812613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.085 [2024-11-20 06:42:44.812786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.085 [2024-11-20 06:42:44.812958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.085 [2024-11-20 06:42:44.812967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.085 [2024-11-20 06:42:44.812974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.085 [2024-11-20 06:42:44.812981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.085 [2024-11-20 06:42:44.825380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.085 [2024-11-20 06:42:44.825749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.085 [2024-11-20 06:42:44.825767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.085 [2024-11-20 06:42:44.825775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.085 [2024-11-20 06:42:44.825958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.085 [2024-11-20 06:42:44.826142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.085 [2024-11-20 06:42:44.826153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.085 [2024-11-20 06:42:44.826160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.085 [2024-11-20 06:42:44.826168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.085 [2024-11-20 06:42:44.838534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.085 [2024-11-20 06:42:44.838872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.086 [2024-11-20 06:42:44.838891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.086 [2024-11-20 06:42:44.838899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.086 [2024-11-20 06:42:44.839070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.086 [2024-11-20 06:42:44.839248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.086 [2024-11-20 06:42:44.839261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.086 [2024-11-20 06:42:44.839268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.086 [2024-11-20 06:42:44.839275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.086 [2024-11-20 06:42:44.851629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.086 [2024-11-20 06:42:44.852049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.086 [2024-11-20 06:42:44.852068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.086 [2024-11-20 06:42:44.852076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.086 [2024-11-20 06:42:44.852263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.086 [2024-11-20 06:42:44.852447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.086 [2024-11-20 06:42:44.852457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.086 [2024-11-20 06:42:44.852465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.086 [2024-11-20 06:42:44.852471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.086 [2024-11-20 06:42:44.864887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.086 [2024-11-20 06:42:44.865223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.086 [2024-11-20 06:42:44.865242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.086 [2024-11-20 06:42:44.865250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.086 [2024-11-20 06:42:44.865434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.086 [2024-11-20 06:42:44.865619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.086 [2024-11-20 06:42:44.865629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.086 [2024-11-20 06:42:44.865636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.086 [2024-11-20 06:42:44.865643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.086 [2024-11-20 06:42:44.878098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.086 [2024-11-20 06:42:44.878531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.086 [2024-11-20 06:42:44.878550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.086 [2024-11-20 06:42:44.878558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.086 [2024-11-20 06:42:44.878742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.086 [2024-11-20 06:42:44.878927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.086 [2024-11-20 06:42:44.878937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.086 [2024-11-20 06:42:44.878945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.086 [2024-11-20 06:42:44.878952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.086 [2024-11-20 06:42:44.891322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.086 [2024-11-20 06:42:44.891736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.086 [2024-11-20 06:42:44.891755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.086 [2024-11-20 06:42:44.891764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.086 [2024-11-20 06:42:44.891947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.086 [2024-11-20 06:42:44.892131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.086 [2024-11-20 06:42:44.892141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.086 [2024-11-20 06:42:44.892148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.086 [2024-11-20 06:42:44.892155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.086 [2024-11-20 06:42:44.904650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.086 [2024-11-20 06:42:44.905101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.086 [2024-11-20 06:42:44.905120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.086 [2024-11-20 06:42:44.905129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.086 [2024-11-20 06:42:44.905330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.086 [2024-11-20 06:42:44.905527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.086 [2024-11-20 06:42:44.905538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.086 [2024-11-20 06:42:44.905545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.086 [2024-11-20 06:42:44.905553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.347 [2024-11-20 06:42:44.917870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.347 [2024-11-20 06:42:44.918312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.347 [2024-11-20 06:42:44.918330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.347 [2024-11-20 06:42:44.918339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.347 [2024-11-20 06:42:44.918522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.347 [2024-11-20 06:42:44.918707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.347 [2024-11-20 06:42:44.918718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.347 [2024-11-20 06:42:44.918725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.347 [2024-11-20 06:42:44.918733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.347 [2024-11-20 06:42:44.930819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.347 [2024-11-20 06:42:44.931192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.347 [2024-11-20 06:42:44.931261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.347 [2024-11-20 06:42:44.931285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.347 [2024-11-20 06:42:44.931865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.347 [2024-11-20 06:42:44.932461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.347 [2024-11-20 06:42:44.932481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.347 [2024-11-20 06:42:44.932496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.347 [2024-11-20 06:42:44.932511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.347 [2024-11-20 06:42:44.945865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.347 [2024-11-20 06:42:44.946387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.347 [2024-11-20 06:42:44.946445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.347 [2024-11-20 06:42:44.946470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.347 [2024-11-20 06:42:44.947049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.347 [2024-11-20 06:42:44.947601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.347 [2024-11-20 06:42:44.947615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.347 [2024-11-20 06:42:44.947625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.347 [2024-11-20 06:42:44.947634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.347 [2024-11-20 06:42:44.958848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:13.347 [2024-11-20 06:42:44.959208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.347 [2024-11-20 06:42:44.959226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420
00:32:13.347 [2024-11-20 06:42:44.959234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set
00:32:13.347 [2024-11-20 06:42:44.959402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor
00:32:13.347 [2024-11-20 06:42:44.959571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:13.347 [2024-11-20 06:42:44.959582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:13.347 [2024-11-20 06:42:44.959588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:13.347 [2024-11-20 06:42:44.959594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:13.347 [2024-11-20 06:42:44.971646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.347 [2024-11-20 06:42:44.972058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-20 06:42:44.972075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.347 [2024-11-20 06:42:44.972082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.347 [2024-11-20 06:42:44.972249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.347 [2024-11-20 06:42:44.972409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.347 [2024-11-20 06:42:44.972418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.347 [2024-11-20 06:42:44.972425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.347 [2024-11-20 06:42:44.972431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.347 [2024-11-20 06:42:44.984415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.347 [2024-11-20 06:42:44.984811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-20 06:42:44.984828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.347 [2024-11-20 06:42:44.984835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.347 [2024-11-20 06:42:44.984994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.347 [2024-11-20 06:42:44.985153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.347 [2024-11-20 06:42:44.985162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.347 [2024-11-20 06:42:44.985169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.347 [2024-11-20 06:42:44.985175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.347 [2024-11-20 06:42:44.997198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.347 [2024-11-20 06:42:44.997554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-20 06:42:44.997571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.347 [2024-11-20 06:42:44.997578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.347 [2024-11-20 06:42:44.997737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.347 [2024-11-20 06:42:44.997896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.347 [2024-11-20 06:42:44.997905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.347 [2024-11-20 06:42:44.997911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.347 [2024-11-20 06:42:44.997917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.347 [2024-11-20 06:42:45.009990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.347 [2024-11-20 06:42:45.010386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-20 06:42:45.010403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.347 [2024-11-20 06:42:45.010411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.347 [2024-11-20 06:42:45.010570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.347 [2024-11-20 06:42:45.010729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.347 [2024-11-20 06:42:45.010738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.347 [2024-11-20 06:42:45.010748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.347 [2024-11-20 06:42:45.010754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.347 [2024-11-20 06:42:45.022786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.347 [2024-11-20 06:42:45.023224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-20 06:42:45.023269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.023293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.023692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.023862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.023872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.023878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.023886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.348 [2024-11-20 06:42:45.035801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.036214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.036231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.036239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.036412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.036584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.036594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.036601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.036608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.348 [2024-11-20 06:42:45.048602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.048928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.048945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.048952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.049111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.049276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.049286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.049293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.049299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.348 [2024-11-20 06:42:45.061339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.061668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.061685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.061692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.061851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.062010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.062019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.062026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.062032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.348 [2024-11-20 06:42:45.074073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.074514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.074560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.074583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.075162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.075679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.075689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.075696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.075702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.348 [2024-11-20 06:42:45.086839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.087158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.087177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.087184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.087348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.087509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.087518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.087524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.087530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.348 [2024-11-20 06:42:45.099566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.099978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.099998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.100006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.100163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.100329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.100339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.100345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.100351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.348 [2024-11-20 06:42:45.112362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.112798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.112843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.112867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.113461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.113888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.113898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.113904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.113911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.348 [2024-11-20 06:42:45.127351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.127853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.127876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.127887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.128140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.128403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.128417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.128427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.128437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.348 [2024-11-20 06:42:45.140389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.140816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.348 [2024-11-20 06:42:45.140834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.348 [2024-11-20 06:42:45.140842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.348 [2024-11-20 06:42:45.141015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.348 [2024-11-20 06:42:45.141193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.348 [2024-11-20 06:42:45.141209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.348 [2024-11-20 06:42:45.141216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.348 [2024-11-20 06:42:45.141223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.348 [2024-11-20 06:42:45.153198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.348 [2024-11-20 06:42:45.153615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.349 [2024-11-20 06:42:45.153633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.349 [2024-11-20 06:42:45.153640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.349 [2024-11-20 06:42:45.153798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.349 [2024-11-20 06:42:45.153957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.349 [2024-11-20 06:42:45.153966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.349 [2024-11-20 06:42:45.153972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.349 [2024-11-20 06:42:45.153978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.349 [2024-11-20 06:42:45.166004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.349 [2024-11-20 06:42:45.166422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.349 [2024-11-20 06:42:45.166469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.349 [2024-11-20 06:42:45.166493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.349 [2024-11-20 06:42:45.166987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.349 [2024-11-20 06:42:45.167147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.349 [2024-11-20 06:42:45.167157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.349 [2024-11-20 06:42:45.167163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.349 [2024-11-20 06:42:45.167169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.609 [2024-11-20 06:42:45.178955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.609 [2024-11-20 06:42:45.179374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.610 [2024-11-20 06:42:45.179392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.610 [2024-11-20 06:42:45.179400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.610 [2024-11-20 06:42:45.179567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.610 [2024-11-20 06:42:45.179736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.610 [2024-11-20 06:42:45.179746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.610 [2024-11-20 06:42:45.179757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.610 [2024-11-20 06:42:45.179765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.610 [2024-11-20 06:42:45.191768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.610 [2024-11-20 06:42:45.192166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.610 [2024-11-20 06:42:45.192183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.610 [2024-11-20 06:42:45.192190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.610 [2024-11-20 06:42:45.192786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.610 [2024-11-20 06:42:45.192962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.610 [2024-11-20 06:42:45.192971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.610 [2024-11-20 06:42:45.192977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.610 [2024-11-20 06:42:45.192983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 703652 Killed "${NVMF_APP[@]}" "$@" 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:13.610 [2024-11-20 06:42:45.204845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.610 [2024-11-20 06:42:45.205272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.610 [2024-11-20 06:42:45.205289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.610 [2024-11-20 06:42:45.205297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.610 [2024-11-20 06:42:45.205469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.610 [2024-11-20 06:42:45.205643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.610 [2024-11-20 06:42:45.205653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.610 [2024-11-20 06:42:45.205659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.610 [2024-11-20 06:42:45.205666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=705044 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 705044 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 705044 ']' 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:13.610 06:42:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:13.610 [2024-11-20 06:42:45.217792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.610 [2024-11-20 06:42:45.218130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.610 [2024-11-20 06:42:45.218149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.610 [2024-11-20 06:42:45.218157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.610 [2024-11-20 06:42:45.218335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.610 [2024-11-20 06:42:45.218509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.610 [2024-11-20 06:42:45.218519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.610 [2024-11-20 06:42:45.218526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.610 [2024-11-20 06:42:45.218533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.610 [2024-11-20 06:42:45.230818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.610 [2024-11-20 06:42:45.231234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.610 [2024-11-20 06:42:45.231252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.610 [2024-11-20 06:42:45.231259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.610 [2024-11-20 06:42:45.231431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.610 [2024-11-20 06:42:45.231605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.610 [2024-11-20 06:42:45.231615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.610 [2024-11-20 06:42:45.231621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.610 [2024-11-20 06:42:45.231628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.610 [2024-11-20 06:42:45.243924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.610 [2024-11-20 06:42:45.244296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.610 [2024-11-20 06:42:45.244315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.610 [2024-11-20 06:42:45.244323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.610 [2024-11-20 06:42:45.244495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.610 [2024-11-20 06:42:45.244668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.610 [2024-11-20 06:42:45.244678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.610 [2024-11-20 06:42:45.244684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.610 [2024-11-20 06:42:45.244695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.610 [2024-11-20 06:42:45.256865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.610 [2024-11-20 06:42:45.257266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.610 [2024-11-20 06:42:45.257283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.610 [2024-11-20 06:42:45.257291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.610 [2024-11-20 06:42:45.257459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.610 [2024-11-20 06:42:45.257627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.610 [2024-11-20 06:42:45.257637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.610 [2024-11-20 06:42:45.257644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.610 [2024-11-20 06:42:45.257651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.610 [2024-11-20 06:42:45.260358] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:32:13.610 [2024-11-20 06:42:45.260399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.610 [2024-11-20 06:42:45.269904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.610 [2024-11-20 06:42:45.270329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.270348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.270356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.270524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.270692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.270702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.611 [2024-11-20 06:42:45.270708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.611 [2024-11-20 06:42:45.270715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.611 [2024-11-20 06:42:45.282853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.611 [2024-11-20 06:42:45.283337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.283356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.283364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.283537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.283708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.283719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.611 [2024-11-20 06:42:45.283727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.611 [2024-11-20 06:42:45.283738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.611 [2024-11-20 06:42:45.295858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.611 [2024-11-20 06:42:45.296310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.296328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.296337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.296510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.296683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.296693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.611 [2024-11-20 06:42:45.296700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.611 [2024-11-20 06:42:45.296707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.611 [2024-11-20 06:42:45.308824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.611 [2024-11-20 06:42:45.309257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.309276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.309284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.309457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.309629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.309639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.611 [2024-11-20 06:42:45.309647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.611 [2024-11-20 06:42:45.309653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.611 [2024-11-20 06:42:45.321740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.611 [2024-11-20 06:42:45.322165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.322183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.322191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.322365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.322534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.322544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.611 [2024-11-20 06:42:45.322552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.611 [2024-11-20 06:42:45.322559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.611 [2024-11-20 06:42:45.334808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.611 [2024-11-20 06:42:45.335244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.335262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.335270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.335453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.335622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.335632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.611 [2024-11-20 06:42:45.335639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.611 [2024-11-20 06:42:45.335645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.611 [2024-11-20 06:42:45.340669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:13.611 [2024-11-20 06:42:45.347817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.611 [2024-11-20 06:42:45.348235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.348253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.348262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.348443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.348612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.348622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.611 [2024-11-20 06:42:45.348629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.611 [2024-11-20 06:42:45.348636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.611 [2024-11-20 06:42:45.360697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.611 [2024-11-20 06:42:45.361118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.361137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.361145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.361319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.361488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.361498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.611 [2024-11-20 06:42:45.361505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.611 [2024-11-20 06:42:45.361512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.611 [2024-11-20 06:42:45.373612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.611 [2024-11-20 06:42:45.374026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.611 [2024-11-20 06:42:45.374044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.611 [2024-11-20 06:42:45.374056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.611 [2024-11-20 06:42:45.374230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.611 [2024-11-20 06:42:45.374399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.611 [2024-11-20 06:42:45.374408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.612 [2024-11-20 06:42:45.374415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.612 [2024-11-20 06:42:45.374422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.612 [2024-11-20 06:42:45.382474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.612 [2024-11-20 06:42:45.382500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.612 [2024-11-20 06:42:45.382507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.612 [2024-11-20 06:42:45.382512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.612 [2024-11-20 06:42:45.382517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:13.612 [2024-11-20 06:42:45.383912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.612 [2024-11-20 06:42:45.384019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.612 [2024-11-20 06:42:45.384021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.612 [2024-11-20 06:42:45.386663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.612 [2024-11-20 06:42:45.387102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.612 [2024-11-20 06:42:45.387121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.612 [2024-11-20 06:42:45.387130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.612 [2024-11-20 06:42:45.387308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.612 [2024-11-20 06:42:45.387483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.612 [2024-11-20 06:42:45.387494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.612 [2024-11-20 06:42:45.387501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.612 [2024-11-20 06:42:45.387509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.612 [2024-11-20 06:42:45.399626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.612 [2024-11-20 06:42:45.400074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.612 [2024-11-20 06:42:45.400096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.612 [2024-11-20 06:42:45.400105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.612 [2024-11-20 06:42:45.400282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.612 [2024-11-20 06:42:45.400456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.612 [2024-11-20 06:42:45.400467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.612 [2024-11-20 06:42:45.400474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.612 [2024-11-20 06:42:45.400488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.612 [2024-11-20 06:42:45.412604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.612 [2024-11-20 06:42:45.413043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.612 [2024-11-20 06:42:45.413065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.612 [2024-11-20 06:42:45.413074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.612 [2024-11-20 06:42:45.413254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.612 [2024-11-20 06:42:45.413429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.612 [2024-11-20 06:42:45.413451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.612 [2024-11-20 06:42:45.413458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.612 [2024-11-20 06:42:45.413466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.612 [2024-11-20 06:42:45.425556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.612 [2024-11-20 06:42:45.425978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.612 [2024-11-20 06:42:45.425998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.612 [2024-11-20 06:42:45.426008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.612 [2024-11-20 06:42:45.426181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.612 [2024-11-20 06:42:45.426360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.612 [2024-11-20 06:42:45.426371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.612 [2024-11-20 06:42:45.426378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.612 [2024-11-20 06:42:45.426386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:13.612 [2024-11-20 06:42:45.438525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.612 [2024-11-20 06:42:45.438893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.612 [2024-11-20 06:42:45.438916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.612 [2024-11-20 06:42:45.438925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.612 [2024-11-20 06:42:45.439100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.612 [2024-11-20 06:42:45.439279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.612 [2024-11-20 06:42:45.439290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.612 [2024-11-20 06:42:45.439297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.612 [2024-11-20 06:42:45.439305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:13.872 [2024-11-20 06:42:45.451571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:13.872 [2024-11-20 06:42:45.452009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.872 [2024-11-20 06:42:45.452027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:13.872 [2024-11-20 06:42:45.452036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:13.872 [2024-11-20 06:42:45.452214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:13.872 [2024-11-20 06:42:45.452388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:13.872 [2024-11-20 06:42:45.452398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:13.872 [2024-11-20 06:42:45.452405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:13.872 [2024-11-20 06:42:45.452412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[... the same nine-entry cycle repeats, unchanged except for timestamps, every ~13 ms from 06:42:45.451 through 06:42:45.635 (15 iterations): connect() to 10.0.0.2:4420 refused with errno = 111, the flush of tqpair=0x5e1500 failing with EBADF, and each pass ending in "Resetting controller failed." ...]
[... one more iteration of the cycle at 06:42:45.647, then the run's periodic throughput sample lands in the middle of the retry noise ...]
00:32:13.874 4853.50 IOPS, 18.96 MiB/s [2024-11-20T05:42:45.710Z]
[... and the cycle resumes at 06:42:45.660 ...]
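The two figures in the throughput sample are consistent with a 4 KiB I/O size (an assumption; the block size is not printed in this excerpt):

    4853.50 IOPS x 4096 B = 19,879,936 B/s; 19,879,936 / 1,048,576 = 18.96 MiB/s

A nonzero rate here, while controller 2 of cnode1 loops through failed resets, suggests I/O is still completing on another path of the subsystem.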
[... the cycle continues at the same ~13 ms cadence from 06:42:45.673 through 06:42:46.078 (32 more iterations, the elapsed-time prefix rolling from 00:32:13.874 to 00:32:14.396); the listener at 10.0.0.2:4420 never returns within this window, so every reconnect attempt on tqpair=0x5e1500 again ends in "Resetting controller failed." ...]
00:32:14.396 [2024-11-20 06:42:46.090928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.396 [2024-11-20 06:42:46.091217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.396 [2024-11-20 06:42:46.091235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.396 [2024-11-20 06:42:46.091243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.396 [2024-11-20 06:42:46.091415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.396 [2024-11-20 06:42:46.091588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.396 [2024-11-20 06:42:46.091598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.396 [2024-11-20 06:42:46.091605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.396 [2024-11-20 06:42:46.091612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:14.396 [2024-11-20 06:42:46.103908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.396 [2024-11-20 06:42:46.104323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.396 [2024-11-20 06:42:46.104341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.396 [2024-11-20 06:42:46.104349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.396 [2024-11-20 06:42:46.104520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.396 [2024-11-20 06:42:46.104692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.396 [2024-11-20 06:42:46.104702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.396 [2024-11-20 06:42:46.104709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.396 [2024-11-20 06:42:46.104716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:14.396 [2024-11-20 06:42:46.116982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.396 [2024-11-20 06:42:46.117369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.396 [2024-11-20 06:42:46.117388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.396 [2024-11-20 06:42:46.117397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.396 [2024-11-20 06:42:46.117569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.396 [2024-11-20 06:42:46.117743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.396 [2024-11-20 06:42:46.117753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.396 [2024-11-20 06:42:46.117760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.396 [2024-11-20 06:42:46.117770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:14.396 [2024-11-20 06:42:46.130065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.396 [2024-11-20 06:42:46.130354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.396 [2024-11-20 06:42:46.130373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.396 [2024-11-20 06:42:46.130381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.396 [2024-11-20 06:42:46.130553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.396 [2024-11-20 06:42:46.130725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.396 [2024-11-20 06:42:46.130736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.396 [2024-11-20 06:42:46.130743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.396 [2024-11-20 06:42:46.130749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:14.396 [2024-11-20 06:42:46.143036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.396 [2024-11-20 06:42:46.143372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.396 [2024-11-20 06:42:46.143390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.396 [2024-11-20 06:42:46.143398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.396 [2024-11-20 06:42:46.143571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.396 [2024-11-20 06:42:46.143744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.396 [2024-11-20 06:42:46.143755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.396 [2024-11-20 06:42:46.143761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.396 [2024-11-20 06:42:46.143768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:14.396 [2024-11-20 06:42:46.150174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.396 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:14.397 [2024-11-20 06:42:46.156046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.397 [2024-11-20 06:42:46.156392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.397 [2024-11-20 06:42:46.156411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.397 [2024-11-20 06:42:46.156419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:14.397 [2024-11-20 06:42:46.156598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.397 [2024-11-20 06:42:46.156771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.397 [2024-11-20 06:42:46.156781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.397 [2024-11-20 06:42:46.156787] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.397 [2024-11-20 06:42:46.156794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:14.397 [2024-11-20 06:42:46.169078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.397 [2024-11-20 06:42:46.169418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.397 [2024-11-20 06:42:46.169436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.397 [2024-11-20 06:42:46.169444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.397 [2024-11-20 06:42:46.169616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.397 [2024-11-20 06:42:46.169789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.397 [2024-11-20 06:42:46.169799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.397 [2024-11-20 06:42:46.169806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.397 [2024-11-20 06:42:46.169813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:14.397 [2024-11-20 06:42:46.182083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.397 [2024-11-20 06:42:46.182517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.397 [2024-11-20 06:42:46.182535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.397 [2024-11-20 06:42:46.182543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.397 [2024-11-20 06:42:46.182714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.397 [2024-11-20 06:42:46.182887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.397 [2024-11-20 06:42:46.182898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.397 [2024-11-20 06:42:46.182904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.397 [2024-11-20 06:42:46.182911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:14.397 Malloc0 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:14.397 [2024-11-20 06:42:46.195025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.397 [2024-11-20 06:42:46.195458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.397 [2024-11-20 06:42:46.195476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1500 with addr=10.0.0.2, port=4420 00:32:14.397 [2024-11-20 06:42:46.195489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1500 is same with the state(6) to be set 00:32:14.397 [2024-11-20 06:42:46.195662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1500 (9): Bad file descriptor 00:32:14.397 [2024-11-20 06:42:46.195834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:14.397 [2024-11-20 06:42:46.195845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:14.397 [2024-11-20 06:42:46.195851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:14.397 [2024-11-20 06:42:46.195858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:14.397 [2024-11-20 06:42:46.208114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:14.397 [2024-11-20 06:42:46.208349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.397 06:42:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 704118 00:32:14.655 [2024-11-20 06:42:46.235086] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:32:15.846 4843.00 IOPS, 18.92 MiB/s [2024-11-20T05:42:49.055Z] 5687.12 IOPS, 22.22 MiB/s [2024-11-20T05:42:49.988Z] 6325.22 IOPS, 24.71 MiB/s [2024-11-20T05:42:50.921Z] 6843.80 IOPS, 26.73 MiB/s [2024-11-20T05:42:51.855Z] 7273.73 IOPS, 28.41 MiB/s [2024-11-20T05:42:52.789Z] 7620.83 IOPS, 29.77 MiB/s [2024-11-20T05:42:53.724Z] 7904.77 IOPS, 30.88 MiB/s [2024-11-20T05:42:55.097Z] 8166.29 IOPS, 31.90 MiB/s [2024-11-20T05:42:55.097Z] 8393.47 IOPS, 32.79 MiB/s 00:32:23.261 Latency(us) 00:32:23.261 [2024-11-20T05:42:55.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:23.261 Verification LBA range: start 0x0 length 0x4000 00:32:23.261 Nvme1n1 : 15.01 8398.71 32.81 13008.17 0.00 5959.99 639.76 16976.94 00:32:23.261 [2024-11-20T05:42:55.097Z] =================================================================================================================== 00:32:23.261 [2024-11-20T05:42:55.097Z] Total : 8398.71 32.81 13008.17 0.00 5959.99 639.76 16976.94 00:32:23.261 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:23.262 rmmod nvme_tcp 00:32:23.262 rmmod nvme_fabrics 00:32:23.262 rmmod nvme_keyring 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 705044 ']' 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 705044 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 705044 ']' 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 705044 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 705044 
00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 705044' 00:32:23.262 killing process with pid 705044 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 705044 00:32:23.262 06:42:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 705044 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.521 06:42:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.428 06:42:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.428 00:32:25.428 real 0m26.055s 00:32:25.428 user 1m0.815s 00:32:25.428 sys 0m6.804s 00:32:25.428 06:42:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:25.428 06:42:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.428 ************************************ 00:32:25.428 END TEST nvmf_bdevperf 00:32:25.428 ************************************ 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.688 ************************************ 00:32:25.688 START TEST nvmf_target_disconnect 00:32:25.688 ************************************ 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:25.688 * Looking for test storage... 
00:32:25.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:25.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.688 --rc genhtml_branch_coverage=1 00:32:25.688 --rc genhtml_function_coverage=1 00:32:25.688 --rc genhtml_legend=1 00:32:25.688 --rc geninfo_all_blocks=1 00:32:25.688 --rc geninfo_unexecuted_blocks=1 00:32:25.688 00:32:25.688 ' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:25.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.688 --rc genhtml_branch_coverage=1 00:32:25.688 --rc genhtml_function_coverage=1 00:32:25.688 --rc genhtml_legend=1 00:32:25.688 --rc geninfo_all_blocks=1 00:32:25.688 --rc geninfo_unexecuted_blocks=1 00:32:25.688 00:32:25.688 ' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:25.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.688 --rc genhtml_branch_coverage=1 00:32:25.688 --rc genhtml_function_coverage=1 00:32:25.688 --rc genhtml_legend=1 00:32:25.688 --rc geninfo_all_blocks=1 00:32:25.688 --rc geninfo_unexecuted_blocks=1 00:32:25.688 00:32:25.688 ' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:25.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.688 --rc genhtml_branch_coverage=1 00:32:25.688 --rc genhtml_function_coverage=1 00:32:25.688 --rc genhtml_legend=1 00:32:25.688 --rc geninfo_all_blocks=1 00:32:25.688 --rc geninfo_unexecuted_blocks=1 00:32:25.688 00:32:25.688 ' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.688 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:25.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.689 06:42:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:32.259 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.259 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.259 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.259 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.259 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:32.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:32.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:32.260 Found net devices under 0000:86:00.0: cvl_0_0 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:32.260 Found net devices under 0000:86:00.1: cvl_0_1 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:32:32.260 00:32:32.260 --- 10.0.0.2 ping statistics --- 00:32:32.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.260 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:32:32.260 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:32:32.261 00:32:32.261 --- 10.0.0.1 ping statistics --- 00:32:32.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.261 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 ************************************ 00:32:32.261 START TEST nvmf_target_disconnect_tc1 00:32:32.261 ************************************ 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:32.261 06:43:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.261 [2024-11-20 06:43:03.608985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-11-20 06:43:03.609027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c9ab0 with addr=10.0.0.2, port=4420 00:32:32.261 [2024-11-20 06:43:03.609047] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:32.261 [2024-11-20 06:43:03.609060] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:32.261 [2024-11-20 06:43:03.609066] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:32.261 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:32.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:32.261 Initializing NVMe Controllers 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.261 00:32:32.261 real 0m0.119s 00:32:32.261 user 0m0.061s 00:32:32.261 sys 0m0.057s 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 ************************************ 00:32:32.261 END TEST nvmf_target_disconnect_tc1 00:32:32.261 ************************************ 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 ************************************ 00:32:32.261 START TEST nvmf_target_disconnect_tc2 00:32:32.261 ************************************ 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=710194 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 710194 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 710194 ']' 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 [2024-11-20 06:43:03.752511] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:32:32.261 [2024-11-20 06:43:03.752554] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.261 [2024-11-20 06:43:03.828793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:32.261 [2024-11-20 06:43:03.869904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.261 [2024-11-20 06:43:03.869939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
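tc1, which finished just above, is a negative test: it points the reconnect example at 10.0.0.2:4420 before any target is listening, so connect() fails with errno 111 (ECONNREFUSED on Linux), spdk_nvme_probe() cannot create the admin qpair, and the NOT wrapper treats the resulting non-zero exit (es=1) as a pass. tc2 now launches a real target inside the namespace with core mask 0xF0 (cores 4-7, matching the four reactor notices below) and configures it through the rpc_cmd calls traced below. Condensed, with the test-framework wrappers shown inline, the bring-up amounts to the following sketch (rpc_cmd is the framework's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    # Sketch of the tc2 bring-up as traced in this log (paths abbreviated).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # framework helper: poll until the RPC socket is up
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420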
00:32:32.261 [2024-11-20 06:43:03.869946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.261 [2024-11-20 06:43:03.869952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.261 [2024-11-20 06:43:03.869957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.261 [2024-11-20 06:43:03.871612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:32.261 [2024-11-20 06:43:03.871720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:32.261 [2024-11-20 06:43:03.871828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:32.261 [2024-11-20 06:43:03.871828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:32.261 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:32.262 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:32:32.262 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:32.262 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:32.262 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.262 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:32.262 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.262 06:43:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 Malloc0 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 [2024-11-20 06:43:04.037060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 06:43:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 [2024-11-20 06:43:04.066253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=710240 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:32.262 06:43:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:34.817 06:43:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 710194 00:32:34.817 06:43:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error 
(sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 [2024-11-20 06:43:06.094852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read 
completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Write completed with error (sct=0, sc=8) 00:32:34.817 starting I/O failed 00:32:34.817 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 [2024-11-20 06:43:06.095049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 
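The flood of completion errors above and below is the intended effect of the kill step traced earlier: the reconnect example is started against the target (-q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF), then the target process is SIGKILLed mid-I/O. Every TCP connection drops, so the host completes all outstanding I/O with the generic NVMe status SCT 0 / SC 0x08 ("command aborted due to SQ deletion") and logs one "CQ transport error -6 (No such device or address)" per qpair as it tears down (qpair ids 4, 3, 2, 1 in turn). Rather than reading the flood line by line, a saved copy of the log can be triaged with standard tools; the file name below is assumed:

    # The step that produced the flood, as traced in target_disconnect.sh:
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    kill -9 "$nvmfpid"    # SIGKILL the target mid-I/O; nothing shuts down cleanly

    # Hypothetical triage over a saved log (file name assumed):
    grep -c 'completed with error (sct=0, sc=8)' target_disconnect.log
    grep -o 'CQ transport error -6 .* qpair id [0-9]*' target_disconnect.log | sort | uniq -c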
00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 [2024-11-20 06:43:06.095254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Read completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.818 Write completed with error (sct=0, sc=8) 00:32:34.818 starting I/O failed 00:32:34.819 Read completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Read completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Write completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Write completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Read completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Read completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Write completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Read completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Write completed with error (sct=0, sc=8) 00:32:34.819 
starting I/O failed 00:32:34.819 Read completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Read completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Read completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Write completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 Write completed with error (sct=0, sc=8) 00:32:34.819 starting I/O failed 00:32:34.819 [2024-11-20 06:43:06.095448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.819 [2024-11-20 06:43:06.095649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.095671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.095820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.095831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.096072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.096104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.096349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.096408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.096565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.096600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.096805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.096838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.096963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.097002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.097178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.097210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 
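From this point on, each pair of entries (a posix_sock_create connect() failure with errno 111, followed by "qpair failed and we were unable to recover it") is one reconnect attempt being refused, because nothing is listening on 10.0.0.2:4420 after the kill; the example keeps retrying for the remainder of its -t 10 run. A script that wanted to gate on the listener actually returning, instead of retrying blind, could poll for it explicitly; a hedged sketch, assuming ss is available in the namespace:

    # Sketch, not part of this test: wait until something listens on port 4420
    # inside the target namespace again before resuming I/O.
    for _ in $(seq 1 30); do
        ip netns exec cvl_0_0_ns_spdk ss -ltn 2>/dev/null | grep -q ':4420' && break
        sleep 1
    done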
00:32:34.819 [2024-11-20 06:43:06.097329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.097358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.097603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.097636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.097775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.097807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.098000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.098032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.098219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.098245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.098472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.098505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.098700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.098733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.098918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.098951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.099072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.099112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 00:32:34.819 [2024-11-20 06:43:06.099290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.819 [2024-11-20 06:43:06.099324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.819 qpair failed and we were unable to recover it. 
00:32:34.820 [2024-11-20 06:43:06.099574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.099606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.099720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.099753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.099875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.099900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.099997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.100019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.100248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.100273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.100482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.100505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.100744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.100768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.100930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.100953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.101064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.101096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.101221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.101254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 
00:32:34.820 [2024-11-20 06:43:06.101400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.101433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.101613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.101646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.101844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.101876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.102062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.102095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.102224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.102250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.102424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.102447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.102553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.102577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.102739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.102772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.102961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.102994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.103199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.103244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 
00:32:34.820 [2024-11-20 06:43:06.103355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.103388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.103495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.103528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.103647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.103680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.103923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.103955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.820 [2024-11-20 06:43:06.104065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.820 [2024-11-20 06:43:06.104098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.820 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.104415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.104497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.104741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.104831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.105031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.105069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.105217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.105244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.105346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.105371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 
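Note that the refused attempts are reported under several distinct tqpair pointers (0x7fd158000b90, 0x7fd164000b90, 0x9d0ba0, 0x7fd15c000b90 so far). These look like addresses of host-side qpair objects; with -c 0xF the example drives multiple qpairs plus the admin qpair, which would explain the same ECONNREFUSED appearing under several pointers. To confirm it is one dead endpoint rather than several, the distinct objects can be listed directly (log file name assumed again):

    # Hypothetical: list the distinct qpair objects behind the repeated failures.
    grep -o 'tqpair=0x[0-9a-f]*' target_disconnect.log | sort -u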
00:32:34.821 [2024-11-20 06:43:06.105546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.105570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.105735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.105768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.105963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.105996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.106111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.106144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.106334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.106365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.106486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.106516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.106774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.106803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.106981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.107003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.107128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.107168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.107318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.107351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 
00:32:34.821 [2024-11-20 06:43:06.107535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.107567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.107701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.107733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.107858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.107890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.108067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.108099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.108239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.108273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.108444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.108476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.108592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.108625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.108809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.108840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.109015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.109047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 00:32:34.821 [2024-11-20 06:43:06.109299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.821 [2024-11-20 06:43:06.109336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.821 qpair failed and we were unable to recover it. 
00:32:34.821 [2024-11-20 06:43:06.109533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.109566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.109809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.109842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.109985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.110018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.110227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.110261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.110446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.110479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.110670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.110703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.110884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.110917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.111181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.111221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.111486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.111520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 00:32:34.822 [2024-11-20 06:43:06.111776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.111809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it. 
00:32:34.822 [2024-11-20 06:43:06.111923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.822 [2024-11-20 06:43:06.111956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.822 qpair failed and we were unable to recover it.
00:32:34.822-00:32:34.828 [... ~200 further repetitions of the identical error triplet elided: posix.c:1054:posix_sock_create connect() failed (errno = 111) followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it.", timestamps 2024-11-20 06:43:06.112 through 06:43:06.159 ...]
00:32:34.828 [2024-11-20 06:43:06.159105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.159138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it.
00:32:34.828 [2024-11-20 06:43:06.159268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.159302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.159539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.159570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.159748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.159781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.159964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.159997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.160218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.160251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.160444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.160477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.160593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.160624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.160815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.160847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.161032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.161065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.161264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.161297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 
00:32:34.828 [2024-11-20 06:43:06.161491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.161522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.161642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.161675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.161860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.161892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.162090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.162122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.162275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.162309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.162424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.162457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.162652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.162684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.162888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.162921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.163147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.163180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.163451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.163483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 
00:32:34.828 [2024-11-20 06:43:06.163591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.163624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.163801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.163833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.164107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.164139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.164354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.164387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.164573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.164605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.828 qpair failed and we were unable to recover it. 00:32:34.828 [2024-11-20 06:43:06.164845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.828 [2024-11-20 06:43:06.164877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.165129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.165162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.165288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.165320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.165451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.165483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.165595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.165628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 
00:32:34.829 [2024-11-20 06:43:06.165895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.165927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.166044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.166075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.166292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.166326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.166450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.166482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.166688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.166721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.166865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.166903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.167024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.167058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.167182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.167237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.167381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.167413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.167538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.167570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 
00:32:34.829 [2024-11-20 06:43:06.167750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.167782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.167994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.168026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.168196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.168239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.168490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.168523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.168660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.168693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.168868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.168899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.169113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.169145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.169284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.169317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.169509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.169541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.169811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.169843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 
00:32:34.829 [2024-11-20 06:43:06.170030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.170063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.170338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.170372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.170559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.170591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.170787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.170820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.170994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.171026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.171267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.171300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.171559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.171592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.171818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.171849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.172078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.172109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.172422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.172456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 
00:32:34.829 [2024-11-20 06:43:06.172633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.172664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.172841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.172873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.829 [2024-11-20 06:43:06.173074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.829 [2024-11-20 06:43:06.173107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.829 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.173354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.173386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.173579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.173611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.173791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.173825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.173956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.173987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.174114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.174146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.174370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.174404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.174513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.174544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 
00:32:34.830 [2024-11-20 06:43:06.174718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.174749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.174879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.174913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.175171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.175213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.175399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.175431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.175609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.175641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.175831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.175869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.176076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.176108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.176294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.176329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.176594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.176625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.176811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.176843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 
00:32:34.830 [2024-11-20 06:43:06.176973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.177005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.177173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.177214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.177398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.177431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.177618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.177650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.177822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.177854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.178132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.178165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.178332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.178366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.178547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.178580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.178757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.178789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.178918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.178952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 
00:32:34.830 [2024-11-20 06:43:06.179075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.179107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.179238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.179271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.179459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.179491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.179660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.179692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.179869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.179900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.180160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.180193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.180395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.180427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.180666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.180698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.180836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.180870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 00:32:34.830 [2024-11-20 06:43:06.181158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.830 [2024-11-20 06:43:06.181189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.830 qpair failed and we were unable to recover it. 
00:32:34.830 [2024-11-20 06:43:06.181390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.181422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.181638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.181671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.181815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.181847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.182037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.182069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.182215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.182248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.182429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.182461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.182572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.182603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.182848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.182880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.182995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.183026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.183215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.183249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 
00:32:34.831 [2024-11-20 06:43:06.183500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.183533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.183661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.183692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.183823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.183854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.183991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.184023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.184134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.184166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.184428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.184468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.184643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.184676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.184797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.184828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.185093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.185124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.185312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.185346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 
00:32:34.831 [2024-11-20 06:43:06.185456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.185488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.185660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.185691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.185960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.185992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.186183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.186224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.186348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.186380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.186682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.186716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.186968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.186999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.187261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.187294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.187512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.187545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 00:32:34.831 [2024-11-20 06:43:06.187741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.831 [2024-11-20 06:43:06.187773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.831 qpair failed and we were unable to recover it. 
00:32:34.831 [2024-11-20 06:43:06.187949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.831 [2024-11-20 06:43:06.187981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:34.831 qpair failed and we were unable to recover it.
00:32:34.837 [... the same three-message error triple (posix_sock_create connect() failed with errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error -> "qpair failed and we were unable to recover it.") repeats about 210 times, from 06:43:06.187949 through 06:43:06.234577, with only the timestamps varying; every attempt targets tqpair=0x7fd15c000b90 at addr=10.0.0.2, port=4420 ...]
00:32:34.837 [2024-11-20 06:43:06.234698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.234731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.234856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.234888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.235149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.235233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.235528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.235565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.235756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.235791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.235970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.236002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.236267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.236301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.236442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.236475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.236670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.236703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-11-20 06:43:06.236813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-11-20 06:43:06.236844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 
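Triage note: errno = 111 is ECONNREFUSED on Linux, so each connect() above reached the network stack but found no listener at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port). The standalone sketch below is not part of the test; it only assumes a Linux host with nothing listening on that address, and reproduces the exact errno that posix_sock_create() keeps reporting:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Endpoint taken from the log above; assumes no NVMe/TCP
         * target is actually listening there. */
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),   /* IANA port for NVMe/TCP */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener on the port, connect() fails and errno is
         * 111 (ECONNREFUSED) -- the same value posix_sock_create()
         * logs in the failures above. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));

        close(fd);
        return 0;
    }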
00:32:34.844 [2024-11-20 06:43:06.268216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.268249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.268369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.268403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.268517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.268548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.268673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.268703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.268873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.268903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.269006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.269037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.269224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.269257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.269385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.269419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.269538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.269570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.269748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.269780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 
00:32:34.844 [2024-11-20 06:43:06.269899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.269933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.270120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.270153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.844 [2024-11-20 06:43:06.270304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.844 [2024-11-20 06:43:06.270338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.844 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.270463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.270492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.270610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.270640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.270902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.270931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.271044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.271074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.271199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.271254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.271380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.271412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.271540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.271572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 
00:32:34.845 [2024-11-20 06:43:06.271697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.271729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.271922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.271955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.272148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.272180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.272371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.272403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.272519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.272548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.272718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.272747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.272867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.272896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.273089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.273121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.273233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.273267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.273401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.273434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 
00:32:34.845 [2024-11-20 06:43:06.273555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.273587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.273769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.273801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.273917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.273955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.274067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.274099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.274210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.274243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.274510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.274539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.274710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.274739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.274841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.274870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.274997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.275027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.275124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.275152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 
00:32:34.845 [2024-11-20 06:43:06.275411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.275441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.275546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.275577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.275697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.275726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.845 [2024-11-20 06:43:06.275907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.845 [2024-11-20 06:43:06.275938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.845 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.276060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.276089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.276258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.276289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.276418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.276447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.276556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.276585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.276762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.276790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.276894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.276923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 
00:32:34.846 [2024-11-20 06:43:06.277042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.277072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.277258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.277288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.277457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.277486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.277594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.277624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.277726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.277755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.277859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.277889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.278072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.278102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.278273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.278305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.278471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.278500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.278711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.278742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 
00:32:34.846 [2024-11-20 06:43:06.278909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.278936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.279042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.279069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.279243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.279272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.279378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.279421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.279595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.279626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.279808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.279841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.279962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.279994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.280174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.280214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.280407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.280440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.280569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.280595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 
00:32:34.846 [2024-11-20 06:43:06.280829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.280856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.281054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.281081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.281219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.281252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.281373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.281400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.281518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.281546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.281667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.281694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.281811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.281838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.281947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.281975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.282213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.846 [2024-11-20 06:43:06.282241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.846 qpair failed and we were unable to recover it. 00:32:34.846 [2024-11-20 06:43:06.282334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.282361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 
00:32:34.847 [2024-11-20 06:43:06.282608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.282636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.282814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.282841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.283023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.283049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.283174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.283209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.283407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.283439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.283561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.283594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.283712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.283745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.283861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.283893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.284089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.284121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.284298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.284334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 
00:32:34.847 [2024-11-20 06:43:06.284506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.284533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.284636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.284663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.284758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.284786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.284878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.284904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.284998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.285025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.285218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.285247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.285367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.285393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.285595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.285627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.285742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.285776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.285905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.285937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 
00:32:34.847 [2024-11-20 06:43:06.286066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.286097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.286282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.286311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.286418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.286445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.286620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.286648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.286752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.286779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.847 [2024-11-20 06:43:06.286950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.847 [2024-11-20 06:43:06.286977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.847 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.287177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.287211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.287312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.287339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.287453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.287479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.287640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.287667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 
00:32:34.848 [2024-11-20 06:43:06.287780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.287807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.287912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.287939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.288031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.288063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.288236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.288263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.288448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.288479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.288664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.288695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.288948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.288981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.289171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.289213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.289342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.289374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.289484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.289523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 
00:32:34.848 [2024-11-20 06:43:06.289659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.289691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.289808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.289840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.289957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.289990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.290181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.290222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.290489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.290521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.290628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.290660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.290915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.290947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.291050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.291081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.291260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.291293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 00:32:34.848 [2024-11-20 06:43:06.291430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.848 [2024-11-20 06:43:06.291462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.848 qpair failed and we were unable to recover it. 
00:32:34.848 [2024-11-20 06:43:06.291584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.849 [2024-11-20 06:43:06.291616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:34.849 qpair failed and we were unable to recover it.
00:32:34.849 (message pair repeated for tqpair=0x7fd158000b90 from 06:43:06.291728 through 06:43:06.292993)
00:32:34.849 [2024-11-20 06:43:06.293152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.849 [2024-11-20 06:43:06.293236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:34.849 qpair failed and we were unable to recover it.
00:32:34.856 (message pair repeated for tqpair=0x7fd164000b90 from 06:43:06.293373 through 06:43:06.331067)
00:32:34.856 [2024-11-20 06:43:06.331218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.331251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.331455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.331488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.331664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.331694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.331883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.331914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.332090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.332121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.332242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.332275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.332452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.332483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.332681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.332713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.332895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.332926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.333040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.333072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 
00:32:34.856 [2024-11-20 06:43:06.333260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.333293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.333488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.333518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.333647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.333685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.333809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.333848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.334128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.334159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.334370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.334403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.856 [2024-11-20 06:43:06.334584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.856 [2024-11-20 06:43:06.334615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.856 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.334738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.334768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.334951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.334983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.335095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.335126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 
00:32:34.857 [2024-11-20 06:43:06.335391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.335425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.335545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.335577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.335827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.335859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.335995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.336026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.336228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.336261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.336400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.336432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.336553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.336584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.336719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.336749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.336857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.336889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.336997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.337029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 
00:32:34.857 [2024-11-20 06:43:06.337129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.337160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.337365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.337399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.337523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.337554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.337748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.337780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.337980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.338012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.338135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.338166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.338317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.338350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.338537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.338569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.338682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.338714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.338952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.338989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 
00:32:34.857 [2024-11-20 06:43:06.339166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.339198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.339319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.339352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.339556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.339587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.339724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.339756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.339889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.339920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.340131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.340162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.340351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.340383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.340520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.340551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.340685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.340717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.340831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.340863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 
00:32:34.857 [2024-11-20 06:43:06.340973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.341003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.341284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.341317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.341506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.341538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.857 [2024-11-20 06:43:06.341649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.857 [2024-11-20 06:43:06.341680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.857 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.341850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.341882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.342010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.342042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.342144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.342174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.342521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.342592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.342748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.342786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.342924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.342957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 
00:32:34.858 [2024-11-20 06:43:06.343142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.343173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.343304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.343338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.343507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.343539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.343709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.343741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.343874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.343906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.344042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.344075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.344214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.344248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.344430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.344462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.344632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.344666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.344872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.344903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 
00:32:34.858 [2024-11-20 06:43:06.345081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.345115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.345305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.345339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.345522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.345557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.345682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.345715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.345888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.345919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.346101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.346133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.346262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.346297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.346407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.346439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.346559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.346592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.346703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.346742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 
00:32:34.858 [2024-11-20 06:43:06.346921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.346954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.347104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.347136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.347262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.347297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.347486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.347517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.347705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.347737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.347858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.347891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.348016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.348049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.348167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.348198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.348398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.348432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 00:32:34.858 [2024-11-20 06:43:06.348572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.348603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.858 qpair failed and we were unable to recover it. 
00:32:34.858 [2024-11-20 06:43:06.348849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.858 [2024-11-20 06:43:06.348882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.349065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.349098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.349273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.349306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.349415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.349446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.349630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.349663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.349857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.349889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.350126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.350159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.350346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.350379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.350504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.350536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.350725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.350757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 
00:32:34.859 [2024-11-20 06:43:06.350888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.350920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.351024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.351056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.351249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.351283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.351413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.351444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.351627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.351658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.351838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.351870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.352089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.352122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.352318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.352351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.352481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.352513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.352727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.352758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 
00:32:34.859 [2024-11-20 06:43:06.352996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.353028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.353148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.353179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.353403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.353435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.353559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.353590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.353709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.353741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.353944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.353975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.354079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.354111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.354300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.354333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.354450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.354483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.354666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.354704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 
00:32:34.859 [2024-11-20 06:43:06.354814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.354846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.354970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.355002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.355174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.355213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.355393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.355424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.355546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.355578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.355700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.355731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.355924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.355957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.859 [2024-11-20 06:43:06.356219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.859 [2024-11-20 06:43:06.356253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.859 qpair failed and we were unable to recover it. 00:32:34.860 [2024-11-20 06:43:06.356373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.860 [2024-11-20 06:43:06.356405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.860 qpair failed and we were unable to recover it. 00:32:34.860 [2024-11-20 06:43:06.356509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.860 [2024-11-20 06:43:06.356540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.860 qpair failed and we were unable to recover it. 
00:32:34.860 [2024-11-20 06:43:06.356652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.860 [2024-11-20 06:43:06.356684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:34.860 qpair failed and we were unable to recover it.
00:32:34.865 (the three messages above repeat back-to-back, with identical tqpair, addr, and port, through [2024-11-20 06:43:06.394416])
00:32:34.865 [2024-11-20 06:43:06.394545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.865 [2024-11-20 06:43:06.394574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.865 qpair failed and we were unable to recover it. 00:32:34.865 [2024-11-20 06:43:06.394739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.394768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.394937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.394966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.395196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.395235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.395420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.395449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.395625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.395654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.395851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.395880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.396064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.396094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.396281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.396312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.396489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.396518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 
00:32:34.866 [2024-11-20 06:43:06.396750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.396780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.397012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.397040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.397168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.397197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.397318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.397348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.397474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.397502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.397612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.397642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.397766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.397796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.397894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.397924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.398089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.398118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.398230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.398271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 
00:32:34.866 [2024-11-20 06:43:06.398455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.398485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.398588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.398617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.398716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.398745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.398852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.398880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.399002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.399031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.399156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.399185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.399359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.399389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.399501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.399530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.399725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.399755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.399946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.399975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 
00:32:34.866 [2024-11-20 06:43:06.400144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.400172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.400293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.400321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.400432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.400462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.400637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.400666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.400844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.400873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.400980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.401008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.866 [2024-11-20 06:43:06.401127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.866 [2024-11-20 06:43:06.401156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.866 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.401366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.401396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.401574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.401602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.401785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.401814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 
00:32:34.867 [2024-11-20 06:43:06.401934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.401963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.402144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.402173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.402294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.402325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.402455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.402484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.402661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.402690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.402856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.402886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.403011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.403040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.403152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.403181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.403335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.403365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.403542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.403570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 
00:32:34.867 [2024-11-20 06:43:06.403689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.403717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.403824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.403853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.404034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.404063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.404228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.404259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.404371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.404400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.404565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.404594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.404785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.404815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.404989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.405018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.405211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.405242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.405407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.405441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 
00:32:34.867 [2024-11-20 06:43:06.405612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.405641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.405811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.405840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.405957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.405987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.406150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.406178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.406394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.406425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.406590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.406618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.406803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.406832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.406943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.406971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.407093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.407122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.407308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.407339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 
00:32:34.867 [2024-11-20 06:43:06.407506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.407535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.407767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.407796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.407926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.407956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.408078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.408107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.408294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.408324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.408504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.408534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.867 [2024-11-20 06:43:06.408702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.867 [2024-11-20 06:43:06.408731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.867 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.408839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.408868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.409047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.409075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.409193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.409232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 
00:32:34.868 [2024-11-20 06:43:06.409432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.409461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.409714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.409744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.409921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.409951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.410126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.410155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.410271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.410301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.410427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.410456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.410578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.410606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.410728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.410757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.410871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.410901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.411025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.411057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 
00:32:34.868 [2024-11-20 06:43:06.411235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.411269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.411458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.411489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.411599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.411632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.411818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.411850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.412026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.412058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.412267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.412301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.412507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.412539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.412663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.412694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.412822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.412854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.412961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.412997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 
00:32:34.868 [2024-11-20 06:43:06.413120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.413152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.413365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.413398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.413511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.413543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.413668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.413700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.413811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.413843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.413969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.414002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.414186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.414245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.414363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.414394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.414579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.414611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.414783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.414815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 
00:32:34.868 [2024-11-20 06:43:06.414984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.415015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.415256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.415290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.415510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.415542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.415653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.415684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.415798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.415830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.416001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.416032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.416222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.868 [2024-11-20 06:43:06.416256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.868 qpair failed and we were unable to recover it. 00:32:34.868 [2024-11-20 06:43:06.416362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.416394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.416530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.416561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.416749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.416780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 
00:32:34.869 [2024-11-20 06:43:06.416959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.416992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.417166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.417197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.417312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.417344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.417473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.417505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.417712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.417744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.417886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.417918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.418038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.418071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.418272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.418305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.418418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.418451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 00:32:34.869 [2024-11-20 06:43:06.418552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.869 [2024-11-20 06:43:06.418584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.869 qpair failed and we were unable to recover it. 
00:32:34.869 [2024-11-20 06:43:06.418715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.869 [2024-11-20 06:43:06.418746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:34.869 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously, without variation, from 06:43:06.418715 through 06:43:06.458146: every reconnect attempt of tqpair=0x7fd158000b90 to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:32:34.875 [2024-11-20 06:43:06.458146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.875 [2024-11-20 06:43:06.458177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:34.875 qpair failed and we were unable to recover it.
00:32:34.875 [2024-11-20 06:43:06.458377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.458409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.458673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.458705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.458970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.459002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.459186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.459230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.459462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.459493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.459611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.459642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.459763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.459795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.459965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.459996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.460119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.460151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.460471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.460506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 
00:32:34.875 [2024-11-20 06:43:06.460633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.460664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.875 qpair failed and we were unable to recover it. 00:32:34.875 [2024-11-20 06:43:06.460778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.875 [2024-11-20 06:43:06.460810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.460986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.461018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.461148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.461180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.461428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.461460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.461641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.461673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.461867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.461900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.462084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.462116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.462289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.462323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.462446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.462478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 
00:32:34.876 [2024-11-20 06:43:06.462591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.462623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.462806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.462838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.463018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.463050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.463156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.463189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.463383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.463415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.463549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.463585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.463695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.463733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.463845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.463876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.463985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.464017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.464126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.464158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 
00:32:34.876 [2024-11-20 06:43:06.464288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.464321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.464506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.464538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.464687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.464722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.464892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.464924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.465117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.465149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.465289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.465321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.465521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.465554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.465684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.465715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.465888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.465920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.466105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.466138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 
00:32:34.876 [2024-11-20 06:43:06.466267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.466301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.466439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.466471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.466601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.466633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.466763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.466794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.466911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.466944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.467079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.467111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.467224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.467257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.467366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.467399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.467585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.876 [2024-11-20 06:43:06.467617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.876 qpair failed and we were unable to recover it. 00:32:34.876 [2024-11-20 06:43:06.467814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.467846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 
00:32:34.877 [2024-11-20 06:43:06.467954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.467986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.468107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.468139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.468360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.468393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.468540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.468572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.468673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.468704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.468819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.468851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.468983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.469015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.469216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.469250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.469361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.469393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.469513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.469544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 
00:32:34.877 [2024-11-20 06:43:06.469726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.469759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.469884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.469916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.470032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.470062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.470247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.470280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.470405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.470437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.470628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.470660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.470779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.470817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.471004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.471034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.471216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.471250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.471387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.471418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 
00:32:34.877 [2024-11-20 06:43:06.471528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.471559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.471751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.471783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.471908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.471940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.472074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.472105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.472253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.472287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.472502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.472534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.472743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.472776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.472879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.472910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.473024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.473056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.473250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.473283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 
00:32:34.877 [2024-11-20 06:43:06.473540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.473573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.473785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.473817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.473954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.473985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.474104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.474135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.474407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.474440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.474556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.877 [2024-11-20 06:43:06.474588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.877 qpair failed and we were unable to recover it. 00:32:34.877 [2024-11-20 06:43:06.474703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.474734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.474837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.474868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.474995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.475028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.475214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.475247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 
00:32:34.878 [2024-11-20 06:43:06.475435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.475466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.475654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.475685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.475812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.475844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.476109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.476140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.476281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.476314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.476419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.476451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.476565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.476597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.476731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.476762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.476934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.476966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.477089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.477120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 
00:32:34.878 [2024-11-20 06:43:06.477330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.477365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.477630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.477662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.477854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.477885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.477994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.478025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.478144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.478176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.478381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.478412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.478654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.478692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.478825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.478856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.479004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.479037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.479290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.479324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 
00:32:34.878 [2024-11-20 06:43:06.479587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.479619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.479809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.479841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.480044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.480077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.480196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.480247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.480379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.480410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.480621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.480652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.480851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.480885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.481007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.481038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.481282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.481315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.481421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.481453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 
00:32:34.878 [2024-11-20 06:43:06.481635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.481668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.481792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.481823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.481955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.481986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.878 [2024-11-20 06:43:06.482244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.878 [2024-11-20 06:43:06.482278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.878 qpair failed and we were unable to recover it. 00:32:34.879 [2024-11-20 06:43:06.482503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.879 [2024-11-20 06:43:06.482535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.879 qpair failed and we were unable to recover it. 00:32:34.879 [2024-11-20 06:43:06.482775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.879 [2024-11-20 06:43:06.482807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.879 qpair failed and we were unable to recover it. 00:32:34.879 [2024-11-20 06:43:06.482934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.879 [2024-11-20 06:43:06.482966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.879 qpair failed and we were unable to recover it. 00:32:34.879 [2024-11-20 06:43:06.483150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.879 [2024-11-20 06:43:06.483181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.879 qpair failed and we were unable to recover it. 00:32:34.879 [2024-11-20 06:43:06.483404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.879 [2024-11-20 06:43:06.483436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.879 qpair failed and we were unable to recover it. 00:32:34.879 [2024-11-20 06:43:06.483554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.879 [2024-11-20 06:43:06.483586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.879 qpair failed and we were unable to recover it. 
00:32:34.879 [2024-11-20 06:43:06.483777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.879 [2024-11-20 06:43:06.483808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:34.879 qpair failed and we were unable to recover it.
00:32:34.881 [2024-11-20 06:43:06.499164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.881 [2024-11-20 06:43:06.499248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:34.881 qpair failed and we were unable to recover it.
00:32:34.884 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it") repeats continuously from 06:43:06.483 through 06:43:06.522, alternating between tqpair=0x7fd158000b90 and tqpair=0x7fd15c000b90, always against addr=10.0.0.2, port=4420 ...]
00:32:34.885 [2024-11-20 06:43:06.522740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.522774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.522902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.522933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.523048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.523081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.523190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.523235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.523360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.523393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.523581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.523614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.523854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.523893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.524070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.524101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.524227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.524262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.524439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.524471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 
00:32:34.885 [2024-11-20 06:43:06.524658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.524689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.524814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.524846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.524976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.525009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.525133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.525164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.525333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.525369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.525510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.525542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.525682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.525716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.525819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.525851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.525972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.526004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.526180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.526221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 
00:32:34.885 [2024-11-20 06:43:06.526344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.526378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.526571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.526603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.526777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.526809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.526979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.527011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.527216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.527250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.527357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.527388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.527568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.527601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.527791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.527824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.528089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.528121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.528303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.528337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 
00:32:34.885 [2024-11-20 06:43:06.528464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.528496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.528682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.528713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.528909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.528942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.529087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.529123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.529242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.529275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.529390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.529422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.529534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.885 [2024-11-20 06:43:06.529565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.885 qpair failed and we were unable to recover it. 00:32:34.885 [2024-11-20 06:43:06.529685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.529718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.529842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.529873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.529981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.530013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 
00:32:34.886 [2024-11-20 06:43:06.530121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.530154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.530354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.530388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.530492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.530524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.530710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.530741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.531012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.531046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.531229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.531264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.531373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.531404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.531536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.531569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.531781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.531814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.531990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.532022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 
00:32:34.886 [2024-11-20 06:43:06.532146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.532178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.532304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.532336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.532468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.532501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.532674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.532706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.532835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.532867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.532995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.533026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.533211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.533244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.533433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.533464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.533584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.533615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.533818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.533849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 
00:32:34.886 [2024-11-20 06:43:06.533970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.534002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.534193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.534237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.534469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.534502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.534696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.534728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.534859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.534891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.535091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.535124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.535247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.535282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.535457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.535489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.535604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.535635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.535762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.535795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 
00:32:34.886 [2024-11-20 06:43:06.535979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.536010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.536185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.536227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.536410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.536444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.536563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.886 [2024-11-20 06:43:06.536600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.886 qpair failed and we were unable to recover it. 00:32:34.886 [2024-11-20 06:43:06.536725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.536757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.536866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.536898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.537093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.537126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.537424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.537458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.537651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.537682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.537797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.537829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 
00:32:34.887 [2024-11-20 06:43:06.538043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.538076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.538216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.538249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.538382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.538414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.538645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.538676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.538805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.538838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.539021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.539053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.539255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.539289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.539471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.539505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.539687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.539719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 00:32:34.887 [2024-11-20 06:43:06.539969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.887 [2024-11-20 06:43:06.540001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:34.887 qpair failed and we were unable to recover it. 
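For readers triaging this log: errno = 111 on Linux is ECONNREFUSED, i.e. the peer answered the TCP SYN with RST because nothing was listening on 10.0.0.2:4420 at that moment (the target had been torn down or had not yet come back up). The short C program below is a minimal sketch, not SPDK code, that reproduces the condition by connecting to a local port with no listener; 127.0.0.1 here is only a stand-in for the 10.0.0.2 target in the log.

/* Minimal sketch (not SPDK code): reproduce the "errno = 111" condition.
 * With no listener bound to the port, connect() fails with ECONNREFUSED,
 * matching "connect() failed, errno = 111" in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* stand-in for 10.0.0.2 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}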
[... from 06:43:06.540 the attempts continue against the same addr=10.0.0.2, port=4420; at 06:43:06.541594 the failing tqpair handle changes to 0x9d0ba0 and the identical error pair keeps repeating ...]
00:32:34.889 [2024-11-20 06:43:06.552923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.552944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it.
00:32:34.889 [2024-11-20 06:43:06.553038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.553066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.553167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.553187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.553295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.553317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.553431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.553454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.553609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.553631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.553791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.553812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.553900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.553921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.554014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.554036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.554192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.554222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.554378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.554399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 
00:32:34.889 [2024-11-20 06:43:06.554488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.554508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.554656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.554678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.554769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.554790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.889 qpair failed and we were unable to recover it. 00:32:34.889 [2024-11-20 06:43:06.554962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.889 [2024-11-20 06:43:06.554985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.555136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.555158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.555310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.555334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.555429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.555451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.555548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.555572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.555672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.555692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.555796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.555816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 
00:32:34.890 [2024-11-20 06:43:06.555978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.556000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.556080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.556100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.556208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.556230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.556331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.556352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.556520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.556543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.556633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.556657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.556879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.556902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.556994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.557014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.557119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.557142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.557228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.557250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 
00:32:34.890 [2024-11-20 06:43:06.557330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.557350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.557516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.557538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.557644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.557665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.557846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.557867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.557947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.557967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.558213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.558236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.558318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.558339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.558441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.558462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.558623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.558645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.558738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.558760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 
00:32:34.890 [2024-11-20 06:43:06.558928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.558950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.559063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.559086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.559189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.559218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.559312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.559333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.559419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.559440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.559531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.559552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.559708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.559730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.559879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.559901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.559999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.560021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.560115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.560136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 
00:32:34.890 [2024-11-20 06:43:06.560241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.890 [2024-11-20 06:43:06.560265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.890 qpair failed and we were unable to recover it. 00:32:34.890 [2024-11-20 06:43:06.560349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.560370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.560524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.560555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.560656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.560679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.560786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.560812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.560922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.560944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.561040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.561062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.561161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.561183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.561302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.561326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.561435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.561456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 
00:32:34.891 [2024-11-20 06:43:06.561540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.561560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.561648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.561668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.561765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.561787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.561894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.561916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.562074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.562096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.562263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.562287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.562388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.562410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.562567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.562589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.562679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.562700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.562852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.562874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 
00:32:34.891 [2024-11-20 06:43:06.562961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.562981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.563073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.563094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.563189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.563215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.563302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.563323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.563499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.563521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.563613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.563633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.563785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.563808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.563967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.563989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.564153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.564176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.564270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.564292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 
00:32:34.891 [2024-11-20 06:43:06.564452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.564475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.564695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.564716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.564805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.564826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.565048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.565069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.565164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.565186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.565275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.565296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.565395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.565417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.565501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.565521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.565607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.891 [2024-11-20 06:43:06.565628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.891 qpair failed and we were unable to recover it. 00:32:34.891 [2024-11-20 06:43:06.565795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.565816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 
00:32:34.892 [2024-11-20 06:43:06.565905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.565926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.566013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.566034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.566225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.566248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.566371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.566396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.566508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.566530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.566649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.566670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.566797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.566819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.566929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.566952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.567029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.567050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.567140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.567160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 
00:32:34.892 [2024-11-20 06:43:06.567265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.567287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.567386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.567406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.567499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.567520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.567674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.567694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.567777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.567798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.567890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.567910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.567989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.568112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.568278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.568395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 
00:32:34.892 [2024-11-20 06:43:06.568505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.568616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.568730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.568831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.568941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.568962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.569044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.569064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.569216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.569238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.569335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.569356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.569440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.569460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.569652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.569674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 
00:32:34.892 [2024-11-20 06:43:06.569770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.569808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.569907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.569929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.570021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.570042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.570128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.570148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.570243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.570265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.570372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.570393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.570478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.892 [2024-11-20 06:43:06.570498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.892 qpair failed and we were unable to recover it. 00:32:34.892 [2024-11-20 06:43:06.570593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.893 [2024-11-20 06:43:06.570613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.893 qpair failed and we were unable to recover it. 00:32:34.893 [2024-11-20 06:43:06.570770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.893 [2024-11-20 06:43:06.570791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.893 qpair failed and we were unable to recover it. 00:32:34.893 [2024-11-20 06:43:06.570955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.893 [2024-11-20 06:43:06.570976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.893 qpair failed and we were unable to recover it. 
00:32:34.893 [2024-11-20 06:43:06.571138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.893 [2024-11-20 06:43:06.571160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.893 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error pair for tqpair=0x9d0ba0 repeats through 2024-11-20 06:43:06.581095 ...]
00:32:34.895 [2024-11-20 06:43:06.581243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.895 [2024-11-20 06:43:06.581314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:34.895 qpair failed and we were unable to recover it.
[... identical pair for tqpair=0x7fd164000b90 repeats three more times through 2024-11-20 06:43:06.581797 ...]
00:32:34.895 [2024-11-20 06:43:06.581890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.895 [2024-11-20 06:43:06.581913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.895 qpair failed and we were unable to recover it.
[... identical pair for tqpair=0x9d0ba0 repeats through 2024-11-20 06:43:06.600254 ...]
00:32:34.898 [2024-11-20 06:43:06.600355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.898 [2024-11-20 06:43:06.600376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.898 qpair failed and we were unable to recover it. 00:32:34.898 [2024-11-20 06:43:06.600463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.898 [2024-11-20 06:43:06.600485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.898 qpair failed and we were unable to recover it. 00:32:34.898 [2024-11-20 06:43:06.600595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.898 [2024-11-20 06:43:06.600617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.898 qpair failed and we were unable to recover it. 00:32:34.898 [2024-11-20 06:43:06.600772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.898 [2024-11-20 06:43:06.600794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.898 qpair failed and we were unable to recover it. 00:32:34.898 [2024-11-20 06:43:06.600885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.898 [2024-11-20 06:43:06.600906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.898 qpair failed and we were unable to recover it. 00:32:34.898 [2024-11-20 06:43:06.601001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.898 [2024-11-20 06:43:06.601024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.898 qpair failed and we were unable to recover it. 00:32:34.898 [2024-11-20 06:43:06.601200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.898 [2024-11-20 06:43:06.601229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.601318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.601339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.601425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.601446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.601527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.601547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 
00:32:34.899 [2024-11-20 06:43:06.601661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.601683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.601792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.601813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.601966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.601987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.602133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.602154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.602311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.602334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.602505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.602525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.602604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.602625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.602732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.602753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.602924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.602948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.603099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.603121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 
00:32:34.899 [2024-11-20 06:43:06.603268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.603290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.603447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.603468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.603555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.603577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.603685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.603708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.603812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.603833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.603985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.604010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.604264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.604287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.604371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.604392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.604545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.604567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.604721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.604742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 
00:32:34.899 [2024-11-20 06:43:06.604899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.604922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.605070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.605092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.605172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.605192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.605310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.605331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.605503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.605524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.605670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.605692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.605911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.605933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.606043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.606064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.606150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.606172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.606415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.606439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 
00:32:34.899 [2024-11-20 06:43:06.606597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.606620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.606791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.606814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.606913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.606933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.607033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.899 [2024-11-20 06:43:06.607056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.899 qpair failed and we were unable to recover it. 00:32:34.899 [2024-11-20 06:43:06.607148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.607169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.607258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.607279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.607395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.607416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.607497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.607518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.607673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.607693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.607776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.607797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 
00:32:34.900 [2024-11-20 06:43:06.607878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.607899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.608054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.608076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.608163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.608184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.608274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.608296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.608382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.608403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.608568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.608589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.608671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.608692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.608769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.608790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.608888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.608908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.609006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.609028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 
00:32:34.900 [2024-11-20 06:43:06.609185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.609213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.609311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.609336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.609430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.609450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.609539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.609560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.609651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.609672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.609748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.609768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.609854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.609875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.610028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.610050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.610129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.610150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.610249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.610271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 
00:32:34.900 [2024-11-20 06:43:06.610433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.610455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.610544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.610565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.610671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.610694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.610774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.610794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.610887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.610908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.611070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.611091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.611194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.611222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.611310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.611331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.611417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.611438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.611585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.611607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 
00:32:34.900 [2024-11-20 06:43:06.611728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.611750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.611900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.900 [2024-11-20 06:43:06.611922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.900 qpair failed and we were unable to recover it. 00:32:34.900 [2024-11-20 06:43:06.612073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.612095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.612197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.612236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.612329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.612350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.612437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.612458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.612616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.612638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.612736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.612757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.612907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.612927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.613027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.613049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 
00:32:34.901 [2024-11-20 06:43:06.613152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.613172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.613273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.613294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.613395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.613418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.613521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.613546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.613626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.613646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.613754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.613775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.613856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.613877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.614034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.614061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.614154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.614175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.614271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.614292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 
00:32:34.901 [2024-11-20 06:43:06.614372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.614392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.614473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.614493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.614580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.614601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.614752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.614777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.614886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.614907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.614991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.615011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.615167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.615188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.615285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.615307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.615393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.615413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.615498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.615520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 
00:32:34.901 [2024-11-20 06:43:06.615697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.615718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.615956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.615977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.616070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.616091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.616274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.616296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.616451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.616472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.616622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.616643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.616730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.616750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.616913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.616934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.617114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.617136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 00:32:34.901 [2024-11-20 06:43:06.617238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.617261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.901 qpair failed and we were unable to recover it. 
00:32:34.901 [2024-11-20 06:43:06.617375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.901 [2024-11-20 06:43:06.617396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.617560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.617581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.617683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.617704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.617803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.617824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.617937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.617958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.618054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.618077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.618297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.618319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.618416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.618437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.618593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.618613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.618724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.618745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 
00:32:34.902 [2024-11-20 06:43:06.619058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.619079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.619295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.619318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.619493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.619513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.619631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.619651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.619923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.619947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.620095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.620117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.620293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.620315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.620421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.620442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.620541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.620563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.620715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.620737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 
00:32:34.902 [2024-11-20 06:43:06.620924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.620944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.621058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.621079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.621261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.621283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.621528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.621549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.621714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.621735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.621855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.621875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.621989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.622010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.622097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.622119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.622266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.622290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 00:32:34.902 [2024-11-20 06:43:06.622448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.902 [2024-11-20 06:43:06.622469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:34.902 qpair failed and we were unable to recover it. 
00:32:34.902 [2024-11-20 06:43:06.622630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.902 [2024-11-20 06:43:06.622651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:34.902 qpair failed and we were unable to recover it.
00:32:34.902 [2024-11-20 06:43:06.622758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.902 [2024-11-20 06:43:06.622779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.623005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.623028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.623280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.623303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.623422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.623442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.623561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.623582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.623777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.623805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.624022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.624042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.624237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.624260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.624372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.624394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.624567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.624589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.624728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.189 [2024-11-20 06:43:06.624753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.189 qpair failed and we were unable to recover it.
00:32:35.189 [2024-11-20 06:43:06.624916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.624937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.625049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.625071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.625234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.625257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.625372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.625393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.625494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.625515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.625615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.625637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.625752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.625774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.625896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.625917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.626026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.626047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.626176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.626198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.626372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.626394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.626483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.626505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.626687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.626708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.626842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.626885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.627003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.627028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.627132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.627153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.627253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.627276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.627455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.627488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.627623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.627654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.627931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.627958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.628127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.628155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.628299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.628332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.628471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.628493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.628599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.628620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.628842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.628865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.629032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.629053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.629279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.629308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.629419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.629441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.629614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.629637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.629754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.629776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.629888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.629910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.630073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.630095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.630248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.630271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.630422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.630444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.630660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.630681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.630879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.630901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.631153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.190 [2024-11-20 06:43:06.631174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.190 qpair failed and we were unable to recover it.
00:32:35.190 [2024-11-20 06:43:06.631331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.631354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.631458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.631479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.631649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.631670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.631867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.631890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.632172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.632194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.632405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.632427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.632615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.632637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.632931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.632953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.633189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.633220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.633389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.633411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.633522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.633544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.633716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.633737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.633919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.633942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.634144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.634168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.634425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.634449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.634642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.634665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.634944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.634968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.635134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.635157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.635333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.635357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.635601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.635625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.635786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.635810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.636054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.636076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.636251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.636276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.636496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.636520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.636706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.636729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.636985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.637008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.637227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.637251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.637371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.637398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.637541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.637576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.637751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.637785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.638076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.638105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.638242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.638276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.638541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.638569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.638755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.638783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.638971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.638998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.639259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.639288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.639501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.639529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.191 [2024-11-20 06:43:06.639704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-11-20 06:43:06.639732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.639917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.639944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.640221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.640252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.640389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.640418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.640684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.640711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.640895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.640922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.641149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.641176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.641320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.641350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.641573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.641601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.641805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.641833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.642040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.642068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.642301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.642330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.642598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.642626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.642881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.642908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.643167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.643195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.643438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.643467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.643685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.643713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.643938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.643965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.644157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.644185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.644351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.644380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.644614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.644652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.644951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.645002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.645259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.645285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.645434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.645457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.645627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.645649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.645799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.645821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.646074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.646096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.646378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.646402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.646566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.646588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.646836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.646858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.647019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.647040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.647223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.647246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.647376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.647399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.647564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.647586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.647712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.647733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.647864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.647885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.648159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.648180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.648374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.192 [2024-11-20 06:43:06.648398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.192 qpair failed and we were unable to recover it.
00:32:35.192 [2024-11-20 06:43:06.648618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.648640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.648741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.648763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.648884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.648905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.649119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.649140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.649385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.649409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.649592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.649614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.649733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.649755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.649984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.650006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.650255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.650282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.650435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.650457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.650628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.650649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.650910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.650933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.651147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.651169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.651277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.651298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.651468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.651489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.651679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.651700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.651882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.651904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.652000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.652022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.652178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.652200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.652397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.652420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.652592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.652614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.652724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.652746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.652938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.652960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.653198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.653228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.653455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.653476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.653645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.653667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.653881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.653902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.654063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.654084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.654328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.654350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.654469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.654490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.654729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.654750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.655019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.655041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.655192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.655218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.655438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.655460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.655577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.655599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.655861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.655886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.656067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.656088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.656256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.193 [2024-11-20 06:43:06.656283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.193 qpair failed and we were unable to recover it.
00:32:35.193 [2024-11-20 06:43:06.656503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-11-20 06:43:06.656525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
00:32:35.194 [2024-11-20 06:43:06.656653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-11-20 06:43:06.656674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
00:32:35.194 [2024-11-20 06:43:06.656994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-11-20 06:43:06.657018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
00:32:35.194 [2024-11-20 06:43:06.657180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-11-20 06:43:06.657207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
00:32:35.194 [2024-11-20 06:43:06.657382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-11-20 06:43:06.657403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
00:32:35.194 [2024-11-20 06:43:06.657573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-11-20 06:43:06.657595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
00:32:35.194 [2024-11-20 06:43:06.657791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-11-20 06:43:06.657812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
00:32:35.194 [2024-11-20 06:43:06.657975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-11-20 06:43:06.657997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
00:32:35.194 [2024-11-20 06:43:06.658237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.658260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.658427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.658448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.658631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.658653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.658936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.658958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.659207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.659230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.659391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.659413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.659655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.659676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.659881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.659902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.660156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.660177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.660384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.660407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 
00:32:35.194 [2024-11-20 06:43:06.660567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.660589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.660760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.660782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.661024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.661045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.661242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.661265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.661385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.661407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.661600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.661622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.661809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.661829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.662082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.662105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.662341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.662363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.662463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.662483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 
00:32:35.194 [2024-11-20 06:43:06.662597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.662618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.662910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.662932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.663148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.663170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.194 [2024-11-20 06:43:06.663373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.194 [2024-11-20 06:43:06.663395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.194 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.663545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.663567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.663797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.663819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.664087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.664108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.664225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.664247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.664489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.664510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.664613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.664633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 
00:32:35.195 [2024-11-20 06:43:06.664750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.664775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.664888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.664910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.665091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.665112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.665294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.665317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.665494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.665516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.665714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.665736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.665918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.665939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.666180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.666208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.666385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.666408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.666625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.666646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 
00:32:35.195 [2024-11-20 06:43:06.666916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.666937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.667167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.667188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.667415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.667437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.667656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.667678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.667909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.667931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.668096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.668118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.668361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.668400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.668598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.668621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.668794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.668816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.668992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.669014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 
00:32:35.195 [2024-11-20 06:43:06.669254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.669277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.669443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.669465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.669745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.669766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.669880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.669900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.670119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.670140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.670334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.670357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.670598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.670620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.670861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.670887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.671118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.671139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.671310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.671332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 
00:32:35.195 [2024-11-20 06:43:06.671519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.671542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.671733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.195 [2024-11-20 06:43:06.671754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.195 qpair failed and we were unable to recover it. 00:32:35.195 [2024-11-20 06:43:06.671941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.196 [2024-11-20 06:43:06.671962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.196 qpair failed and we were unable to recover it. 00:32:35.196 [2024-11-20 06:43:06.672218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.196 [2024-11-20 06:43:06.672241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.196 qpair failed and we were unable to recover it. 00:32:35.196 [2024-11-20 06:43:06.672481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.196 [2024-11-20 06:43:06.672502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.196 qpair failed and we were unable to recover it. 00:32:35.196 [2024-11-20 06:43:06.672620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.196 [2024-11-20 06:43:06.672641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.196 qpair failed and we were unable to recover it. 00:32:35.196 [2024-11-20 06:43:06.672817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.196 [2024-11-20 06:43:06.672839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.196 qpair failed and we were unable to recover it. 00:32:35.196 [2024-11-20 06:43:06.673028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.196 [2024-11-20 06:43:06.673050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.196 qpair failed and we were unable to recover it. 00:32:35.196 [2024-11-20 06:43:06.673211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.196 [2024-11-20 06:43:06.673233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.196 qpair failed and we were unable to recover it. 00:32:35.196 [2024-11-20 06:43:06.673405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.196 [2024-11-20 06:43:06.673427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.196 qpair failed and we were unable to recover it. 
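errno = 111 on Linux is ECONNREFUSED: the host at 10.0.0.2 is reachable but nothing is accepting TCP connections on port 4420 (the NVMe/TCP default), so every dial attempt is rejected immediately. The standalone C sketch below reproduces the same errno pattern against any reachable host with no listener on the chosen port; it is illustrative only and is not SPDK's posix_sock_create:

/* Minimal sketch: show connect() failing with errno = 111 (ECONNREFUSED).
 * The address and port mirror the log above; this is not SPDK code. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP target port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints:
         *   connect() failed, errno = 111 (Connection refused)
         * the same errno reported by posix_sock_create above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}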
00:32:35.196 [2024-11-20 06:43:06.673875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9deaf0 is same with the state(6) to be set
00:32:35.196 [2024-11-20 06:43:06.674245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.196 [2024-11-20 06:43:06.674315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:35.196 qpair failed and we were unable to recover it.
[... the same failure sequence for tqpair=0x7fd164000b90 (addr=10.0.0.2, port=4420) repeats through [2024-11-20 06:43:06.676351] ...]
00:32:35.196 [2024-11-20 06:43:06.676551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.196 [2024-11-20 06:43:06.676576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.196 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 attempts against tqpair=0x9d0ba0 (addr=10.0.0.2, port=4420) continue through [2024-11-20 06:43:06.700305]; every one ends with "qpair failed and we were unable to recover it." ...]
00:32:35.199 [2024-11-20 06:43:06.700546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.700578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.700810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.700831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.700940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.700961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.701180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.701209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.701461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.701484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.701717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.701738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.701898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.701919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.702140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.702171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.702466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.702537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.702765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.702800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 
00:32:35.199 [2024-11-20 06:43:06.703093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.703126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.703392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.703426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.703567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.703600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.703776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.703807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.703914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.703947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.704188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.704229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.704515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.704546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.704809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.704837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.705059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.199 [2024-11-20 06:43:06.705080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.199 qpair failed and we were unable to recover it. 00:32:35.199 [2024-11-20 06:43:06.705243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.705265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 
00:32:35.200 [2024-11-20 06:43:06.705511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.705542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.705723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.705754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.706016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.706047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.706336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.706368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.706633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.706654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.706872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.706893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.707055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.707077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.707343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.707366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.707602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.707623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.707838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.707859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 
00:32:35.200 [2024-11-20 06:43:06.707968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.707988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.708226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.708248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.708492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.708514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.708753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.708774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.709008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.709029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.709273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.709296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.709482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.709503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.709765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.709786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.709885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.709907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.710147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.710168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 
00:32:35.200 [2024-11-20 06:43:06.710426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.710447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.710613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.710634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.710853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.710885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.711154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.711185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.711465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.711498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.711774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.711806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.712000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.712032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.712301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.712335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.712622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.712653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.712918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.712949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 
00:32:35.200 [2024-11-20 06:43:06.713132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.713163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.713360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.713393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.713570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.713601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.713865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.713896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.714077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.714108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.714343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.714365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.714583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.200 [2024-11-20 06:43:06.714604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.200 qpair failed and we were unable to recover it. 00:32:35.200 [2024-11-20 06:43:06.714865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.714886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.715075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.715097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.715296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.715329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 
00:32:35.201 [2024-11-20 06:43:06.715500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.715531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.715808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.715840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.716100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.716131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.716419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.716453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.716718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.716739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.716906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.716927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.717169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.717190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.717449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.717471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.717618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.717640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.717824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.717846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 
00:32:35.201 [2024-11-20 06:43:06.718074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.718107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.718298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.718332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.718540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.718573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.718836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.718868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.719157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.719188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.719373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.719406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.719696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.719728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.719982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.720014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.720213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.720247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.720505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.720527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 
00:32:35.201 [2024-11-20 06:43:06.720630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.720651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.720844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.720865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.721130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.721152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.721369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.721392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.721563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.721585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.721734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.721760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.201 [2024-11-20 06:43:06.721958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.201 [2024-11-20 06:43:06.721990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.201 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.722251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.722285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.722452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.722474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.722671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.722702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 
00:32:35.202 [2024-11-20 06:43:06.722957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.722989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.723167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.723199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.723403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.723425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.723657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.723678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.723865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.723886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.724061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.724082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.724251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.724275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.724521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.724552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.724843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.724874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.725083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.725116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 
00:32:35.202 [2024-11-20 06:43:06.725354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.725388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.725651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.725673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.725911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.725932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.726162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.726184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.726346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.726368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.726642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.726674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.726864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.726896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.727085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.727117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.727306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.727339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.727597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.727629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 
00:32:35.202 [2024-11-20 06:43:06.727916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.727938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.202 [2024-11-20 06:43:06.728120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-11-20 06:43:06.728141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.202 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.728307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.728330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.728437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.728459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.728637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.728657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.728898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.728930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.729145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.729177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.729456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.729489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.729715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.729747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.730013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.730045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 
00:32:35.203 [2024-11-20 06:43:06.730256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.730290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.730519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.730550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.730721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.730742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.731010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.731030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.731192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.731225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.731504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.731525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.731707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.731733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.732003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.732035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.732221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.732253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 00:32:35.203 [2024-11-20 06:43:06.732513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-11-20 06:43:06.732545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.203 qpair failed and we were unable to recover it. 
00:32:35.203 [2024-11-20 06:43:06.732760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.203 [2024-11-20 06:43:06.732792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.203 qpair failed and we were unable to recover it.
00:32:35.203 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 06:43:06.733055 through 06:43:06.783139 ...]
00:32:35.211 [2024-11-20 06:43:06.783164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.211 [2024-11-20 06:43:06.783187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.211 qpair failed and we were unable to recover it.
00:32:35.211 [2024-11-20 06:43:06.783307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.211 [2024-11-20 06:43:06.783330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.211 qpair failed and we were unable to recover it. 00:32:35.211 [2024-11-20 06:43:06.783511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.211 [2024-11-20 06:43:06.783533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.211 qpair failed and we were unable to recover it. 00:32:35.211 [2024-11-20 06:43:06.783643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.211 [2024-11-20 06:43:06.783664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.211 qpair failed and we were unable to recover it. 00:32:35.211 [2024-11-20 06:43:06.783887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.211 [2024-11-20 06:43:06.783910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.211 qpair failed and we were unable to recover it. 00:32:35.211 [2024-11-20 06:43:06.784152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.211 [2024-11-20 06:43:06.784175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.211 qpair failed and we were unable to recover it. 00:32:35.211 [2024-11-20 06:43:06.784342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.211 [2024-11-20 06:43:06.784366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.211 qpair failed and we were unable to recover it. 00:32:35.211 [2024-11-20 06:43:06.784631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.784664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.784973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.785005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.785213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.785248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.785390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.785422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 
00:32:35.212 [2024-11-20 06:43:06.785614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.785647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.785913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.785936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.786183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.786212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.786435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.786459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.786701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.786723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.786947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.786970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.787132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.787153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.787417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.787442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.787626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.787659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.787858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.787891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 
00:32:35.212 [2024-11-20 06:43:06.788108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.788141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.788397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.788431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.788679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.788712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.788835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.788856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.789037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.789060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.789181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.789210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.789311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.789332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.789522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.789543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.789711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.789734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.212 [2024-11-20 06:43:06.789935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.789956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 
00:32:35.212 [2024-11-20 06:43:06.790064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.212 [2024-11-20 06:43:06.790087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.212 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.790269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.790292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.790461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.790483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.790602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.790624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.790713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.790733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.790902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.790923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.791076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.791099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.791216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.791239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.791341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.791362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.791658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.791690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 
00:32:35.213 [2024-11-20 06:43:06.791909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.791940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.792137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.792169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.792429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.792463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.792653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.792685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.792824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.792847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.792937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.792958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.793128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.793150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.793331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.793354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.793474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.793497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.793723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.793746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 
00:32:35.213 [2024-11-20 06:43:06.793909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.793932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.794086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.794109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.794279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.794302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.794502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.794525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.794623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.794643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.794808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.794831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.213 qpair failed and we were unable to recover it. 00:32:35.213 [2024-11-20 06:43:06.794943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.213 [2024-11-20 06:43:06.794964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.795193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.795232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.795334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.795360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.795446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.795468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 
00:32:35.214 [2024-11-20 06:43:06.795556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.795577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.795733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.795756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.795923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.795944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.796050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.796073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.796170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.796192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.796440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.796463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.796582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.796604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.796781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.796804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.796907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.796928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.797019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.797040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 
00:32:35.214 [2024-11-20 06:43:06.797196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.797229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.797390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.797413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.797537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.797559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.797816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.797848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.797980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.798013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.798138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.798171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.798307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.798340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.798606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.798628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.798818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.798841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.798948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.798971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 
00:32:35.214 [2024-11-20 06:43:06.799067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.799088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.799265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.799288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.214 qpair failed and we were unable to recover it. 00:32:35.214 [2024-11-20 06:43:06.799393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.214 [2024-11-20 06:43:06.799416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.799641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.799664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.799841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.799863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.799980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.800002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.800165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.800188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.800438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.800461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.800584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.800607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.800782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.800805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 
00:32:35.215 [2024-11-20 06:43:06.801075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.801114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.801289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.801313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.801485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.801507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.801667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.801690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.801917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.801939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.802100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.802123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.802295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.802341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.802479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.802512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.802779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.802811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.803000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.803023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 
00:32:35.215 [2024-11-20 06:43:06.803139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.803161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.803335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.803358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.803583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.803604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.803866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.803888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.804060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.804081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.804270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.804293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.804525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.804556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.804749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.804782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.804973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.805006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.805120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.805152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 
00:32:35.215 [2024-11-20 06:43:06.805358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.805391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.805578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.215 [2024-11-20 06:43:06.805611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.215 qpair failed and we were unable to recover it. 00:32:35.215 [2024-11-20 06:43:06.805787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.805825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.805932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.805955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.806124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.806146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.806319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.806343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.806504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.806526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.806638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.806660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.806816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.806837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.807008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.807030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 
00:32:35.216 [2024-11-20 06:43:06.807114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.807133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.807290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.807313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.807484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.807505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.807757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.807790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.807915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.807947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.808059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.808089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.808226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.808266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.808458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.808490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.808804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.808826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 00:32:35.216 [2024-11-20 06:43:06.809073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.216 [2024-11-20 06:43:06.809095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.216 qpair failed and we were unable to recover it. 
00:32:35.216 [2024-11-20 06:43:06.809199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.216 [2024-11-20 06:43:06.809228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.216 qpair failed and we were unable to recover it.
00:32:35.216 [2024-11-20 06:43:06.809477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.216 [2024-11-20 06:43:06.809500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.216 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 06:43:06.809 through 06:43:06.859 ...]
00:32:35.222 [2024-11-20 06:43:06.859732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.222 [2024-11-20 06:43:06.859753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.222 qpair failed and we were unable to recover it.
00:32:35.222 [2024-11-20 06:43:06.859862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.222 [2024-11-20 06:43:06.859884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.222 qpair failed and we were unable to recover it. 00:32:35.222 [2024-11-20 06:43:06.859984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.222 [2024-11-20 06:43:06.860005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.222 qpair failed and we were unable to recover it. 00:32:35.222 [2024-11-20 06:43:06.860232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.222 [2024-11-20 06:43:06.860255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.222 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.860384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.860405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.860669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.860692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.860922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.860944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.861189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.861219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.861419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.861442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.861683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.861705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.861929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.861951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 
00:32:35.223 [2024-11-20 06:43:06.862182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.862210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.862442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.862464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.862587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.862609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.862870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.862891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.863145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.863168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.863434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.863457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.863689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.863711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.863958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.863981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.864114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.864136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.864321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.864344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 
00:32:35.223 [2024-11-20 06:43:06.864525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.864547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.864738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.864760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.864979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.865001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.865257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.865280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.865442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.865464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.865747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.865768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.866002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.866024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.866219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.866242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.866494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.866516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.866746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.866767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 
00:32:35.223 [2024-11-20 06:43:06.867017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.867039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.867292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.867323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.867485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.867508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.867765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.867787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.867967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.867990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.868158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.868180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.868449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.868472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.868659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.223 [2024-11-20 06:43:06.868679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.223 qpair failed and we were unable to recover it. 00:32:35.223 [2024-11-20 06:43:06.868957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.868980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.869083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.869104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 
00:32:35.224 [2024-11-20 06:43:06.869341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.869365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.869537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.869559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.869825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.869847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.870126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.870147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.870270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.870294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.870557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.870579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.870705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.870727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.870955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.870977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.871148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.871170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.871453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.871477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 
00:32:35.224 [2024-11-20 06:43:06.871611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.871634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.871895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.871917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.872008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.872028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.872258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.872282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.872468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.872489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.872651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.872673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.872929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.872951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.873218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.873241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.873497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.873519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.873757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.873779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 
00:32:35.224 [2024-11-20 06:43:06.874037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.874059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.874301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.874325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.874557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.874580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.874834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.874857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.875054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.875076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.875308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.875331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.875452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.875473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.875641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.875663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.875894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.875917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 00:32:35.224 [2024-11-20 06:43:06.876108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.224 [2024-11-20 06:43:06.876130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.224 qpair failed and we were unable to recover it. 
00:32:35.225 [2024-11-20 06:43:06.876292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.876315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.876573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.876595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.876832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.876854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.877078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.877100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.877356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.877380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.877571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.877593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.877716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.877738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.877987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.878010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.878168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.878190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.878462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.878485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 
00:32:35.225 [2024-11-20 06:43:06.878659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.878682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.878934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.878956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.879197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.879228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.879506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.879529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.879761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.879783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.880037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.880059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.880192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.880223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.880457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.880479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.880740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.880762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.881011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.881034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 
00:32:35.225 [2024-11-20 06:43:06.881310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.881333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.881496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.881519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.881630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.881653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.881937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.881958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.882147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.882170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.882442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.882465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.882726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.882749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.883009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.883031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.883290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.883314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.883489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.883517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 
00:32:35.225 [2024-11-20 06:43:06.883796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.883818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.884103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.884125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.884384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.884407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.884655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.884677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.225 [2024-11-20 06:43:06.884856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.225 [2024-11-20 06:43:06.884879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.225 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.885040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.885062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.885323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.885346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.885535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.885558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.885753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.885776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.886010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.886033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 
00:32:35.226 [2024-11-20 06:43:06.886301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.886325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.886509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.886531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.886811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.886834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.887012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.887034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.887269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.887291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.887476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.887497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.887763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.887785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.887959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.887981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.888218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.888242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.888403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.888426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 
00:32:35.226 [2024-11-20 06:43:06.888749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.888771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.889016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.889039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.889326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.889351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.889535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.889558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.889817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.889840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.890019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.890041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.890319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.890342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.890632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.890655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.890902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.890924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 00:32:35.226 [2024-11-20 06:43:06.891127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.891149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it. 
00:32:35.226 [2024-11-20 06:43:06.891352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.226 [2024-11-20 06:43:06.891375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.226 qpair failed and we were unable to recover it.
00:32:35.232 [the three-message failure above repeats ~200 times between 06:43:06.891 and 06:43:06.941, identical except for the microsecond timestamps: every connect() attempt to addr=10.0.0.2, port=4420 for tqpair=0x9d0ba0 fails with errno = 111 and the qpair cannot be recovered]
00:32:35.232 [2024-11-20 06:43:06.942124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.232 [2024-11-20 06:43:06.942146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.232 qpair failed and we were unable to recover it. 00:32:35.232 [2024-11-20 06:43:06.942386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.232 [2024-11-20 06:43:06.942409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.232 qpair failed and we were unable to recover it. 00:32:35.232 [2024-11-20 06:43:06.942692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.232 [2024-11-20 06:43:06.942714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.232 qpair failed and we were unable to recover it. 00:32:35.232 [2024-11-20 06:43:06.942956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.232 [2024-11-20 06:43:06.942977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.232 qpair failed and we were unable to recover it. 00:32:35.232 [2024-11-20 06:43:06.943138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.232 [2024-11-20 06:43:06.943164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.232 qpair failed and we were unable to recover it. 00:32:35.232 [2024-11-20 06:43:06.943277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.232 [2024-11-20 06:43:06.943300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.232 qpair failed and we were unable to recover it. 00:32:35.232 [2024-11-20 06:43:06.943534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.232 [2024-11-20 06:43:06.943557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.232 qpair failed and we were unable to recover it. 00:32:35.232 [2024-11-20 06:43:06.943790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.232 [2024-11-20 06:43:06.943812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.232 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.944070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.944092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.944361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.944384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 
00:32:35.233 [2024-11-20 06:43:06.944559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.944581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.944698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.944720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.944979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.945002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.945278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.945301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.945432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.945453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.945708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.945730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.946010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.946032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.946223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.946246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.946535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.946558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.946834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.946856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 
00:32:35.233 [2024-11-20 06:43:06.947110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.947132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.947389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.947412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.947593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.947616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.947787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.947809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.948051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.948074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.948312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.948335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.948565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.948587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.948851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.948874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.949138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.949160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.949465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.949488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 
00:32:35.233 [2024-11-20 06:43:06.949742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.949764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.949928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.949951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.950135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.950159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.950415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.950438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.950693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.950715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.950898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.950920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.951178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.951200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.951316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.951338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.951531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.951552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.951821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.951843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 
00:32:35.233 [2024-11-20 06:43:06.952126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.952149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.952404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.952428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.952605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.952627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.952881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.952903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.953183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.953212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.953382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.233 [2024-11-20 06:43:06.953405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.233 qpair failed and we were unable to recover it. 00:32:35.233 [2024-11-20 06:43:06.953587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.953609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.953790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.953812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.954049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.954070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.954254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.954277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 
00:32:35.234 [2024-11-20 06:43:06.954469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.954492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.954748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.954770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.955022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.955044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.955323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.955347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.955548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.955569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.955822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.955845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.956080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.956102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.956347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.956370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.956620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.956642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.956842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.956865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 
00:32:35.234 [2024-11-20 06:43:06.957095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.957117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.957365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.957388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.957549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.957571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.957767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.957789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.958043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.958066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.958348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.958372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.958632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.958655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.958912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.958934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.959095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.959117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.959372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.959395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 
00:32:35.234 [2024-11-20 06:43:06.959615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.959637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.959920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.959942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.960246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.960273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.960510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.960532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.960720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.960742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.961001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.961023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.961298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.961321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.961425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.961447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.961706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.961728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.961959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.961982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 
00:32:35.234 [2024-11-20 06:43:06.962242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.962265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.962428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.962449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.962625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.962647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.962810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.234 [2024-11-20 06:43:06.962832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.234 qpair failed and we were unable to recover it. 00:32:35.234 [2024-11-20 06:43:06.963004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.963026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.963268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.963291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.963558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.963580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.963803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.963825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.963994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.964016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.964274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.964298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 
00:32:35.235 [2024-11-20 06:43:06.964530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.964551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.964792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.964815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.964980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.965003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.965273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.965296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.965575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.965598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.965792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.965814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.966070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.966092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.966273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.966296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.966578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.966600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.966840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.966863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 
00:32:35.235 [2024-11-20 06:43:06.967056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.967078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.967310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.967334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.967591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.967614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.967855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.967877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.968153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.968174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.968362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.968385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.968566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.968589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.968769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.968791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.969093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.969116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.969232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.969255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 
00:32:35.235 [2024-11-20 06:43:06.969502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.969525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.969708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.969731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.970039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.970061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.970323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.970346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.970511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.970533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.970695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.970717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.970899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.970922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.971121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.971143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.971397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.971421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.971652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.971674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 
00:32:35.235 [2024-11-20 06:43:06.971932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.971955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.972046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.235 [2024-11-20 06:43:06.972066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.235 qpair failed and we were unable to recover it. 00:32:35.235 [2024-11-20 06:43:06.972319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.236 [2024-11-20 06:43:06.972343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.236 qpair failed and we were unable to recover it. 00:32:35.236 [2024-11-20 06:43:06.972600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.236 [2024-11-20 06:43:06.972622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.236 qpair failed and we were unable to recover it. 00:32:35.236 [2024-11-20 06:43:06.972901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.236 [2024-11-20 06:43:06.972923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.236 qpair failed and we were unable to recover it. 00:32:35.236 [2024-11-20 06:43:06.973178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.236 [2024-11-20 06:43:06.973200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.236 qpair failed and we were unable to recover it. 00:32:35.236 [2024-11-20 06:43:06.973454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.236 [2024-11-20 06:43:06.973477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.236 qpair failed and we were unable to recover it. 00:32:35.236 [2024-11-20 06:43:06.973699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.236 [2024-11-20 06:43:06.973721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.236 qpair failed and we were unable to recover it. 00:32:35.236 [2024-11-20 06:43:06.974007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.236 [2024-11-20 06:43:06.974029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.236 qpair failed and we were unable to recover it. 00:32:35.236 [2024-11-20 06:43:06.974323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.236 [2024-11-20 06:43:06.974347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.236 qpair failed and we were unable to recover it. 
00:32:35.236 [2024-11-20 06:43:06.974439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.236 [2024-11-20 06:43:06.974460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.236 qpair failed and we were unable to recover it.
00:32:35.236 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats verbatim roughly 200 more times, with only the timestamps advancing from 06:43:06.974 to 06:43:07.026 and the elapsed-time prefix from 00:32:35.236 to 00:32:35.526 ...]
00:32:35.526 [2024-11-20 06:43:07.026617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.526 [2024-11-20 06:43:07.026638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.526 qpair failed and we were unable to recover it.
00:32:35.526 [2024-11-20 06:43:07.026895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.526 [2024-11-20 06:43:07.026917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.526 qpair failed and we were unable to recover it. 00:32:35.526 [2024-11-20 06:43:07.027109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.526 [2024-11-20 06:43:07.027132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.526 qpair failed and we were unable to recover it. 00:32:35.526 [2024-11-20 06:43:07.027295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.526 [2024-11-20 06:43:07.027317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.526 qpair failed and we were unable to recover it. 00:32:35.526 [2024-11-20 06:43:07.027559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.027581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.027744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.027766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.027968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.027990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.028249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.028272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.028480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.028502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.028784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.028806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.029059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.029081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 
00:32:35.527 [2024-11-20 06:43:07.029327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.029350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.029585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.029607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.029857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.029879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.030126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.030148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.030403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.030427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.030628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.030650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.030824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.030847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.031026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.031047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.031227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.031252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.031475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.031497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 
00:32:35.527 [2024-11-20 06:43:07.031672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.031694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.031873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.031896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.032150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.032171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.032431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.032456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.032634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.032657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.032854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.032876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.032985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.033005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.033332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.033355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.033532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.033554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.033686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.033709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 
00:32:35.527 [2024-11-20 06:43:07.033966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.033989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.034277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.034301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.034535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.034558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.034818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.034841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.035004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.035027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.035258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.035282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.035539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.035561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.035807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.035830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.036002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.036024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.036216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.036240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 
00:32:35.527 [2024-11-20 06:43:07.036415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.036437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.527 qpair failed and we were unable to recover it. 00:32:35.527 [2024-11-20 06:43:07.036685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.527 [2024-11-20 06:43:07.036708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.036942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.036964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.037218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.037242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.037483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.037506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.037759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.037787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.037898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.037919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.038177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.038199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.038459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.038482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.038741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.038764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 
00:32:35.528 [2024-11-20 06:43:07.039000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.039022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.039199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.039231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.039481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.039503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.039743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.039765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.040005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.040027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.040287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.040311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.040475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.040497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.040750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.040773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.040956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.040978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.041243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.041267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 
00:32:35.528 [2024-11-20 06:43:07.041452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.041475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.041709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.041732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.041896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.041918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.042195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.042236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.042412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.042434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.042616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.042638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.042818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.042841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.043041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.043063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.043267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.043291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.043549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.043572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 
00:32:35.528 [2024-11-20 06:43:07.043831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.043853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.044059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.044081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.044339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.528 [2024-11-20 06:43:07.044363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.528 qpair failed and we were unable to recover it. 00:32:35.528 [2024-11-20 06:43:07.044626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.044648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.044883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.044907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.045145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.045167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.045443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.045466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.045696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.045719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.045974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.045996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.046119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.046141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 
00:32:35.529 [2024-11-20 06:43:07.046324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.046347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.046512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.046533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.046747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.046770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.047031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.047053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.047263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.047286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.047542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.047564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.047848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.047870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.048049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.048071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.048255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.048278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.048459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.048481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 
00:32:35.529 [2024-11-20 06:43:07.048606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.048628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.048858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.048880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.049112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.049135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.049326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.049349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.049608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.049630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.049816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.049839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.050011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.050032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.050294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.050318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.050561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.050583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.050818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.050840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 
00:32:35.529 [2024-11-20 06:43:07.051000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.051024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.051279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.051302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.051562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.051584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.051705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.051726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.051908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.051930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.052136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.052159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.052386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.052409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.052647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.052669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.052791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.052812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.052991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.053013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 
00:32:35.529 [2024-11-20 06:43:07.053275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.529 [2024-11-20 06:43:07.053299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.529 qpair failed and we were unable to recover it. 00:32:35.529 [2024-11-20 06:43:07.053581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.053613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.053884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.053917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.054199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.054238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.054519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.054541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.054822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.054845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.055010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.055032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.055284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.055307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.055511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.055533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.055734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.055756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 
00:32:35.530 [2024-11-20 06:43:07.055936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.055958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.056163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.056196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.056484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.056516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.056739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.056771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.056918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.056949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.057244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.057279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.057428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.057459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.057744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.057777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.058047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.058079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.058374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.058397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 
00:32:35.530 [2024-11-20 06:43:07.058606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.058629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.058890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.058912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.059140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.059161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.059358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.059381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.059614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.059637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.059915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.059937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.060192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.060222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.060467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.060489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.060696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.060718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.060902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.060933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 
00:32:35.530 [2024-11-20 06:43:07.061239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.061272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.061545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.061568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.061752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.061774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.062009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.062030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.062287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.062310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.062553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.062575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.062815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.062837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.063108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.063130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.063285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.530 [2024-11-20 06:43:07.063307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.530 qpair failed and we were unable to recover it. 00:32:35.530 [2024-11-20 06:43:07.063559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.063591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 
00:32:35.531 [2024-11-20 06:43:07.063712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.063743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.063946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.063977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.064248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.064269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.064498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.064520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.064768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.064794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.064977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.064998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.065254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.065278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.065559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.065580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.065833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.065855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.066087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.066108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 
00:32:35.531 [2024-11-20 06:43:07.066356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.066379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.066631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.066652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.066895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.066926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.067230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.067265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.067539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.067560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.067863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.067885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.068140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.068162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.068368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.068391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.068634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.068657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.068891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.068923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 
00:32:35.531 [2024-11-20 06:43:07.069212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.069246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.069527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.069550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.069756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.069778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.070007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.070029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.070276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.070299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.070552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.070574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.070804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.070826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.071080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.071102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.071301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.071325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.071586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.071608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 
00:32:35.531 [2024-11-20 06:43:07.071845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.071868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.072065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.072091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.072275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.072298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.072514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.072536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.072784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.072807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.072970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.072992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.073189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.073217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.531 qpair failed and we were unable to recover it. 00:32:35.531 [2024-11-20 06:43:07.073454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.531 [2024-11-20 06:43:07.073477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.073732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.073754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.073913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.073934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 
00:32:35.532 [2024-11-20 06:43:07.074211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.074244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.074528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.074560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.074833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.074866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.075152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.075183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.075380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.075421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.075671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.075693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.075951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.075973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.076151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.076174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.076464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.076498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.076712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.076746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 
00:32:35.532 [2024-11-20 06:43:07.077013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.077046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.077334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.077358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.077640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.077663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.077890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.077912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.078146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.078168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.078457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.078481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.078762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.078784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.079039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.079061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.079294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.079318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.079558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.079581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 
00:32:35.532 [2024-11-20 06:43:07.079838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.079861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.080116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.080137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.080371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.080394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.080556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.080578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.080842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.080876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.081156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.081188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.081409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.081432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.081611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.081633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.081900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.081923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.082086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.082108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 
00:32:35.532 [2024-11-20 06:43:07.082311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.532 [2024-11-20 06:43:07.082334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.532 qpair failed and we were unable to recover it. 00:32:35.532 [2024-11-20 06:43:07.082516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.082558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.082759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.082796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.083074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.083113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.083228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.083250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.083436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.083458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.083651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.083673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.083836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.083857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.084052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.084094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.084295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.084329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 
00:32:35.533 [2024-11-20 06:43:07.084630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.084662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.084905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.084937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.085223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.085256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.085472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.085504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.085780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.085811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.086091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.086124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.086244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.086282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.086558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.086580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.086836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.086858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.087133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.087156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 
00:32:35.533 [2024-11-20 06:43:07.087338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.087362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.087618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.087641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.087823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.087845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.088046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.088068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.088227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.088250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.088480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.088501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.088683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.088705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.088864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.088887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.089161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.089193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.089465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.089504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 
00:32:35.533 [2024-11-20 06:43:07.089702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.089733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.089954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.089986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.090162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.090184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.090426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.090449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.090708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.090730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.090902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.090924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.091098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.091119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.091375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.091409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.091631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.533 [2024-11-20 06:43:07.091665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.533 qpair failed and we were unable to recover it. 00:32:35.533 [2024-11-20 06:43:07.091938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.091970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 
00:32:35.534 [2024-11-20 06:43:07.092261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.092296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.092571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.092604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.092917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.092950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.093213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.093237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.093417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.093440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.093626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.093648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.093812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.093834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.094013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.094034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.094292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.094314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.094556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.094577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 
00:32:35.534 [2024-11-20 06:43:07.094770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.094791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.095049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.095080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.095385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.095417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.095699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.095721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.096013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.096036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.096242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.096265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.096439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.096460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.096697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.096720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.096976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.096998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.097125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.097147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 
00:32:35.534 [2024-11-20 06:43:07.097409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.097431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.097563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.097585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.097760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.097783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.097955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.097977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.098219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.098242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.098504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.098527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.098807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.098829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.099011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.099033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.099287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.099311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.099565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.099586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 
00:32:35.534 [2024-11-20 06:43:07.099821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.099848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.100084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.100107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.100342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.100366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.100489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.100511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.100691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.100714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.100949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.100972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.101232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.534 [2024-11-20 06:43:07.101256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.534 qpair failed and we were unable to recover it. 00:32:35.534 [2024-11-20 06:43:07.101463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.101486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.101723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.101745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.101956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.101979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 
00:32:35.535 [2024-11-20 06:43:07.102261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.102287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.102518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.102541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.102773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.102796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.103053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.103075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.103344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.103369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.103630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.103652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.103982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.104006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.104263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.104286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.104519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.104541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 00:32:35.535 [2024-11-20 06:43:07.104782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.535 [2024-11-20 06:43:07.104804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.535 qpair failed and we were unable to recover it. 
00:32:35.537 [2024-11-20 06:43:07.126273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.537 [2024-11-20 06:43:07.126307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.537 qpair failed and we were unable to recover it.
00:32:35.537 [2024-11-20 06:43:07.126647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.537 [2024-11-20 06:43:07.126724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:35.537 qpair failed and we were unable to recover it.
00:32:35.537 [2024-11-20 06:43:07.127027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.537 [2024-11-20 06:43:07.127064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:35.537 qpair failed and we were unable to recover it.
00:32:35.537 [2024-11-20 06:43:07.127329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.537 [2024-11-20 06:43:07.127365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:35.537 qpair failed and we were unable to recover it.
00:32:35.537 [2024-11-20 06:43:07.127657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.537 [2024-11-20 06:43:07.127690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:35.537 qpair failed and we were unable to recover it.
00:32:35.537 [2024-11-20 06:43:07.127899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.537 [2024-11-20 06:43:07.127932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:35.537 qpair failed and we were unable to recover it.
00:32:35.537 [2024-11-20 06:43:07.128137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.537 [2024-11-20 06:43:07.128169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:35.537 qpair failed and we were unable to recover it.
00:32:35.537 [2024-11-20 06:43:07.128497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.537 [2024-11-20 06:43:07.128543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.537 qpair failed and we were unable to recover it.
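Note that the tqpair pointer alternates between 0x9d0ba0 and 0x7fd15c000b90 in the excerpt above, so at least two distinct qpair objects are retrying the same target; the log alone does not identify which controllers own them. The visible pattern, connect attempts a few hundred microseconds apart, each reported as unrecoverable, amounts to a bounded retry loop. A hedged sketch of that pattern follows; try_connect() is a hypothetical helper written for this illustration, not an SPDK API, and SPDK's real reconnect path in the NVMe TCP transport is event-driven rather than blocking:

    /* retry_connect.c - sketch of the retry pattern implied by the log:
     * keep re-attempting the TCP connect, then report failure when the
     * attempt budget is exhausted. Illustrative only. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static bool try_connect(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return false;

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        inet_pton(AF_INET, ip, &sa.sin_addr);

        bool ok = connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0;
        if (!ok)
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        close(fd);
        return ok;
    }

    int main(void)
    {
        /* Linear backoff between attempts; a real transport would drive
         * this from an event loop, often with exponential backoff. */
        for (int attempt = 1; attempt <= 5; attempt++) {
            if (try_connect("10.0.0.2", 4420))
                return 0;
            usleep(100000 * attempt);   /* 100 ms, 200 ms, ... */
        }
        fprintf(stderr, "unable to recover the connection; giving up\n");
        return 1;
    }

With no listener present, the loop burns its entire budget exactly as the log does; once a listener binds port 4420, the first successful connect() ends the retries.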
00:32:35.540 [2024-11-20 06:43:07.156585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.156618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.156821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.156853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.157057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.157089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.157342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.157375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.157691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.157723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.158006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.158037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.158314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.158348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.158609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.158648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.158931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.158975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.159261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.159295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 
00:32:35.540 [2024-11-20 06:43:07.159518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.159550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.159857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.540 [2024-11-20 06:43:07.159895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.540 qpair failed and we were unable to recover it. 00:32:35.540 [2024-11-20 06:43:07.160145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.160176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.160462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.160495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.160776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.160809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.161009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.161041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.161290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.161323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.161581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.161613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.161889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.161921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.162172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.162223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 
00:32:35.541 [2024-11-20 06:43:07.162432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.162465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.162739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.162771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.163057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.163089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.163278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.163311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.163569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.163600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.163868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.163891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.164067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.164089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.164293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.164326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.164536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.164569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.164775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.164808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 
00:32:35.541 [2024-11-20 06:43:07.165004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.165036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.165223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.165256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.165512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.165545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.165797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.165828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.166083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.166115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.166342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.166376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.166658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.166691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.166947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.166969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.167230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.167253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.167470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.167492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 
00:32:35.541 [2024-11-20 06:43:07.167752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.167793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.168047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.168079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.168278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.168320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.168523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.168545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.168717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.168739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.168958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.168989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.169266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.169307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.169480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.169502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.541 qpair failed and we were unable to recover it. 00:32:35.541 [2024-11-20 06:43:07.169680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.541 [2024-11-20 06:43:07.169712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.169967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.169999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 
00:32:35.542 [2024-11-20 06:43:07.170253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.170286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.170424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.170447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.170607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.170633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.170906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.170938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.171237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.171271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.171542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.171574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.171829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.171862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.172162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.172195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.172428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.172460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.172734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.172756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 
00:32:35.542 [2024-11-20 06:43:07.172920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.172941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.173115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.173147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.173377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.173410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.173552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.173592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.173849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.173871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.173979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.174000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.174291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.174315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.174610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.174642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.174850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.174882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.175101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.175133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 
00:32:35.542 [2024-11-20 06:43:07.175432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.175466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.175733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.175765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.176045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.176077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.176363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.176397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.176620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.176652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.176919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.176941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.177122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.177144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.177346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.177369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.177530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.177552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.177859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.177891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 
00:32:35.542 [2024-11-20 06:43:07.178172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.178221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.178515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.178547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.178808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.178840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.542 [2024-11-20 06:43:07.179148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.542 [2024-11-20 06:43:07.179180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.542 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.179394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.179417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.179670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.179702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.179976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.180008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.180226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.180261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.180408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.180440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.180648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.180689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 
00:32:35.543 [2024-11-20 06:43:07.180895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.180918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.181084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.181106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.181382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.181415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.181698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.181721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.181849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.181871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.182052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.182083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.182286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.182320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.182464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.182496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.182794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.182826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.183118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.183150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 
00:32:35.543 [2024-11-20 06:43:07.183428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.183462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.183754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.183786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.184066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.184098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.184389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.184423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.184698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.184740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.184904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.184926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.185212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.185235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.185366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.185390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.185506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.185529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.185782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.185804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 
00:32:35.543 [2024-11-20 06:43:07.185980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.186002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.186235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.186259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.186524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.186546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.186770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.186792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.187050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.187074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.187327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.187360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.187643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.187675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.187956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.187979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.188178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.188208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.188392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.188414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 
00:32:35.543 [2024-11-20 06:43:07.188590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.188615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.188869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.543 [2024-11-20 06:43:07.188891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.543 qpair failed and we were unable to recover it. 00:32:35.543 [2024-11-20 06:43:07.189078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.544 [2024-11-20 06:43:07.189099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.544 qpair failed and we were unable to recover it. 00:32:35.544 [2024-11-20 06:43:07.189335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.544 [2024-11-20 06:43:07.189357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.544 qpair failed and we were unable to recover it. 00:32:35.544 [2024-11-20 06:43:07.189610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.544 [2024-11-20 06:43:07.189632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.544 qpair failed and we were unable to recover it. 00:32:35.544 [2024-11-20 06:43:07.189830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.544 [2024-11-20 06:43:07.189853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.544 qpair failed and we were unable to recover it. 00:32:35.544 [2024-11-20 06:43:07.190144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.544 [2024-11-20 06:43:07.190175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.544 qpair failed and we were unable to recover it. 00:32:35.544 [2024-11-20 06:43:07.190466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.544 [2024-11-20 06:43:07.190500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.544 qpair failed and we were unable to recover it. 00:32:35.544 [2024-11-20 06:43:07.190804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.544 [2024-11-20 06:43:07.190837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.544 qpair failed and we were unable to recover it. 00:32:35.544 [2024-11-20 06:43:07.191101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.544 [2024-11-20 06:43:07.191133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.544 qpair failed and we were unable to recover it. 
00:32:35.544 [2024-11-20 06:43:07.191338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.544 [2024-11-20 06:43:07.191362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.544 qpair failed and we were unable to recover it.
00:32:35.544 [... the posix_sock_create/nvme_tcp_qpair_connect_sock error pair above repeats for every reconnect attempt from 06:43:07.191 through 06:43:07.247; each attempt fails with errno = 111 (ECONNREFUSED) against 10.0.0.2, port 4420, and each ends with "qpair failed and we were unable to recover it." All but three attempts target tqpair=0x9d0ba0; the three at roughly 06:43:07.216-06:43:07.217 target tqpair=0x7fd158000b90 ...]
00:32:35.549 [2024-11-20 06:43:07.247720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.549 [2024-11-20 06:43:07.247753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.549 qpair failed and we were unable to recover it.
00:32:35.549 [2024-11-20 06:43:07.248012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.549 [2024-11-20 06:43:07.248034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.549 qpair failed and we were unable to recover it. 00:32:35.549 [2024-11-20 06:43:07.248218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.549 [2024-11-20 06:43:07.248241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.549 qpair failed and we were unable to recover it. 00:32:35.549 [2024-11-20 06:43:07.248472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.549 [2024-11-20 06:43:07.248494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.549 qpair failed and we were unable to recover it. 00:32:35.549 [2024-11-20 06:43:07.248763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.549 [2024-11-20 06:43:07.248795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.549 qpair failed and we were unable to recover it. 00:32:35.549 [2024-11-20 06:43:07.249094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.549 [2024-11-20 06:43:07.249126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.549 qpair failed and we were unable to recover it. 00:32:35.549 [2024-11-20 06:43:07.249348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.549 [2024-11-20 06:43:07.249382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.549 qpair failed and we were unable to recover it. 00:32:35.549 [2024-11-20 06:43:07.249593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.549 [2024-11-20 06:43:07.249626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.549 qpair failed and we were unable to recover it. 00:32:35.549 [2024-11-20 06:43:07.249827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.249859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.250135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.250167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.250482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.250516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 
00:32:35.550 [2024-11-20 06:43:07.250817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.250849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.251115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.251147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.251421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.251455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.251741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.251764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.252054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.252086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.252300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.252334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.252590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.252622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.252775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.252806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.252987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.253019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.253247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.253305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 
00:32:35.550 [2024-11-20 06:43:07.253540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.253572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.253833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.253865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.254170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.254220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.254497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.254529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.254757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.254788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.255042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.255080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.255384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.255417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.255703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.255735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.255937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.255969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.256222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.256256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 
00:32:35.550 [2024-11-20 06:43:07.256471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.256504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.256766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.256798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.256976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.256997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.257172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.257214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.257512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.257543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.257804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.257836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.258020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.258052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.258354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.258387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.258597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.258619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.258798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.258820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 
00:32:35.550 [2024-11-20 06:43:07.259054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.259085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.259366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.259400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.259606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.259638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.259890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.259922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.260177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.550 [2024-11-20 06:43:07.260216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.550 qpair failed and we were unable to recover it. 00:32:35.550 [2024-11-20 06:43:07.260522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.260555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.260855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.260887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.261106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.261137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.261339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.261373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.261560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.261592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 
00:32:35.551 [2024-11-20 06:43:07.261872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.261904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.262043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.262074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.262260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.262300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.262498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.262531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.262788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.262819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.263122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.263154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.263466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.263500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.263707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.263738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.263942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.263980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.264216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.264239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 
00:32:35.551 [2024-11-20 06:43:07.264479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.264501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.264755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.264778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.265016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.265039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.265304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.265339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.265477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.265510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.265734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.265767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.266024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.266047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.266310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.266333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.266600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.266622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.266870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.266902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 
00:32:35.551 [2024-11-20 06:43:07.267166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.267198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.267439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.267472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.267751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.267783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.268072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.268104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.268381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.268415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.268685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.268717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.268941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.268983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.269164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.269186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.551 [2024-11-20 06:43:07.269378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.551 [2024-11-20 06:43:07.269401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.551 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.269656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.269677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 
00:32:35.552 [2024-11-20 06:43:07.269862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.269885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.270051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.270074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.270329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.270363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.270587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.270619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.270836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.270869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.271053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.271075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.271253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.271276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.271456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.271478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.271665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.271687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.271942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.271964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 
00:32:35.552 [2024-11-20 06:43:07.272220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.272244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.272379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.272403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.272646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.272668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.272936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.272976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.273197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.273242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.273456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.273498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.273669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.273692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.273877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.273909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.274166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.274200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.274458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.274494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 
00:32:35.552 [2024-11-20 06:43:07.274780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.274812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.275091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.275124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.275408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.275443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.275590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.275622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.275784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.275806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.275912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.275934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.276113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.276134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.276375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.276398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.276581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.276606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.276859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.276881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 
00:32:35.552 [2024-11-20 06:43:07.277178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.277219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.277470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.277504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.277783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.277816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.278071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.278104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.278397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.278432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.278728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.278769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.279028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.552 [2024-11-20 06:43:07.279051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.552 qpair failed and we were unable to recover it. 00:32:35.552 [2024-11-20 06:43:07.279236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.279261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.279540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.279563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.279822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.279845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 
00:32:35.553 [2024-11-20 06:43:07.280100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.280129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.280360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.280385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.280517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.280541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.280800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.280834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.281115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.281148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.281274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.281309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.281509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.281542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.281821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.281854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.282057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.282090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 00:32:35.553 [2024-11-20 06:43:07.282335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.282371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it. 
00:32:35.553 [2024-11-20 06:43:07.282650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.553 [2024-11-20 06:43:07.282683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.553 qpair failed and we were unable to recover it.
[identical retry failures elided: the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously from 06:43:07.282650 through 06:43:07.336146, every attempt failing with errno = 111 against tqpair=0x9d0ba0 at 10.0.0.2, port 4420, and each attempt ending with "qpair failed and we were unable to recover it."]
00:32:35.901 [2024-11-20 06:43:07.336124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.336146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it.
00:32:35.901 [2024-11-20 06:43:07.336327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.336350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.336610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.336633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.336799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.336822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.337006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.337027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.337136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.337158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.337414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.337437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.337698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.337720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.337907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.337929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.338212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.338235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.338397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.338420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 
00:32:35.901 [2024-11-20 06:43:07.338609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.338632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.338893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.338916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.339181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.339210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.339401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.339424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.339656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.339678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.339935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.339957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.340083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.340105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.340268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.340292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.340391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.340414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.340626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.340648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 
00:32:35.901 [2024-11-20 06:43:07.340807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.340830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.341020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.341042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.341249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.341273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.341475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.341499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.341759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.342072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.342099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.342280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.342304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.342555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.901 [2024-11-20 06:43:07.342578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.901 qpair failed and we were unable to recover it. 00:32:35.901 [2024-11-20 06:43:07.342810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.342833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.342935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.342957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 
00:32:35.902 [2024-11-20 06:43:07.343059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.343081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.343195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.343226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.343411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.343434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.343694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.343717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.343976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.343999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.344256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.344279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.344510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.344532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.344821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.344844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.345051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.345073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.345250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.345274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 
00:32:35.902 [2024-11-20 06:43:07.345507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.345529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.345794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.345817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.346049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.346072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.346261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.346284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.346480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.346502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.346738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.346761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.347053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.347076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.347364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.347388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.347563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.347586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.347821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.347844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 
00:32:35.902 [2024-11-20 06:43:07.348026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.348049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.348283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.348306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.348544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.348566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.348824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.348847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.348975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.348997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.349266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.349289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.349465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.349487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.349744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.349766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.350052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.350075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.350322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.350345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 
00:32:35.902 [2024-11-20 06:43:07.350608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.350630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.350834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.350857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.351060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.351082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.351344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.351367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.902 qpair failed and we were unable to recover it. 00:32:35.902 [2024-11-20 06:43:07.351654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.902 [2024-11-20 06:43:07.351677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.351875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.351897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.352072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.352099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.352286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.352309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.352468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.352491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.352651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.352674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 
00:32:35.903 [2024-11-20 06:43:07.352868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.352891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.353124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.353146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.353370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.353393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.353656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.353679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.353913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.353936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.354169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.354191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.354455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.354478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.354641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.354663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.354920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.354943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.355197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.355228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 
00:32:35.903 [2024-11-20 06:43:07.355469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.355492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.355724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.355747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.356008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.356031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.356219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.356242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.356341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.356363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.356627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.356649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.356943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.356965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.357222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.357261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.357464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.357487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.357723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.357746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 
00:32:35.903 [2024-11-20 06:43:07.357991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.358013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.358264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.358287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.358469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.358492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.358749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.358776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.358958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.358981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.359081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.359104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.359382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.903 [2024-11-20 06:43:07.359405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.903 qpair failed and we were unable to recover it. 00:32:35.903 [2024-11-20 06:43:07.359603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.359626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.359887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.359910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.360089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.360112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 
00:32:35.904 [2024-11-20 06:43:07.360391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.360414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.360700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.360723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.361006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.361029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.361267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.361291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.361502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.361524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.361754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.361776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.362042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.362065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.362190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.362233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.362423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.362446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.362727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.362750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 
00:32:35.904 [2024-11-20 06:43:07.363033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.363055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.363239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.363262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.363437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.363460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.363719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.363741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.364046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.364068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.364286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.364309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.364572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.364594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.364843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.364865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.365134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.365157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.365411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.365434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 
00:32:35.904 [2024-11-20 06:43:07.365694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.365717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.365960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.365983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.366219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.366243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.366523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.366547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.366755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.366779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.367080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.367113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.367324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.367359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.367566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.367600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.367805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.367836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 00:32:35.904 [2024-11-20 06:43:07.368041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.904 [2024-11-20 06:43:07.368073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.904 qpair failed and we were unable to recover it. 
00:32:35.904 [2024-11-20 06:43:07.368335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.904 [2024-11-20 06:43:07.368368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.904 qpair failed and we were unable to recover it.
00:32:35.910 [... identical connect()/qpair-recovery error triplet repeated through 2024-11-20 06:43:07.422467 while the initiator kept retrying tqpair=0x9d0ba0, addr=10.0.0.2, port=4420 ...]
00:32:35.910 [2024-11-20 06:43:07.422780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.910 [2024-11-20 06:43:07.422812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.910 qpair failed and we were unable to recover it. 00:32:35.910 [2024-11-20 06:43:07.423031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.910 [2024-11-20 06:43:07.423062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.910 qpair failed and we were unable to recover it. 00:32:35.910 [2024-11-20 06:43:07.423324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.910 [2024-11-20 06:43:07.423358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.910 qpair failed and we were unable to recover it. 00:32:35.910 [2024-11-20 06:43:07.423572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.910 [2024-11-20 06:43:07.423605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.910 qpair failed and we were unable to recover it. 00:32:35.910 [2024-11-20 06:43:07.423801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.910 [2024-11-20 06:43:07.423832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.910 qpair failed and we were unable to recover it. 00:32:35.910 [2024-11-20 06:43:07.424107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.910 [2024-11-20 06:43:07.424129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.910 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.424386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.424410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.424599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.424621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.424822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.424844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.425026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.425048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 
00:32:35.911 [2024-11-20 06:43:07.425229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.425252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.425516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.425549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.425859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.425893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.426166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.426187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.426356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.426379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.426564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.426597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.426799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.426831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.426961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.426993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.427275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.427299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.427557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.427579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 
00:32:35.911 [2024-11-20 06:43:07.427838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.427860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.428048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.428070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.428254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.428289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.428483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.428515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.428712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.428744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.428998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.429030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.429338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.429371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.429635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.429667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.429863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.429894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.430079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.430101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 
00:32:35.911 [2024-11-20 06:43:07.430382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.430405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.430689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.430711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.431007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.431039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.431188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.431229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.431432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.431464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.431903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.431939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.432085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.432120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.432321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.432346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.432627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.432649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.432939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.432981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 
00:32:35.911 [2024-11-20 06:43:07.433315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.433340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.433554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.433586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.433892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.433924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.434136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.911 [2024-11-20 06:43:07.434168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.911 qpair failed and we were unable to recover it. 00:32:35.911 [2024-11-20 06:43:07.434446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.434479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.434789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.434822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.435071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.435104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.435397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.435431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.435710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.435743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.436012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.436044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 
00:32:35.912 [2024-11-20 06:43:07.436346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.436380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.436677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.436700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.436878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.436900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.437137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.437159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.437431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.437454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.437636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.437658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.437861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.437883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.438145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.438167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.438405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.438428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.438606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.438628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 
00:32:35.912 [2024-11-20 06:43:07.438840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.438862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.439094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.439116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.439300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.439334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.439621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.439654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.439929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.439961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.440223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.440247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.440414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.440444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.440624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.440646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.440881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.440913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.441193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.441235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 
00:32:35.912 [2024-11-20 06:43:07.441513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.441535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.441771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.441793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.442000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.442023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.442252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.442275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.442505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.442527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.442700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.442723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.442916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.442938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.443098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.443120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.443369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.443403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.443687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.443718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 
00:32:35.912 [2024-11-20 06:43:07.443937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.443970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.444171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.444212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.912 [2024-11-20 06:43:07.444438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.912 [2024-11-20 06:43:07.444461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.912 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.444714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.444746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.444951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.444983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.445237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.445260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.445546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.445578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.445726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.445758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.446035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.446066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.446386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.446419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 
00:32:35.913 [2024-11-20 06:43:07.446624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.446656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.446779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.446810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.447084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.447116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.447309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.447333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.447547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.447579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.447788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.447820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.448032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.448064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.448368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.448401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.448669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.448701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.448960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.449003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 
00:32:35.913 [2024-11-20 06:43:07.449180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.449208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.449439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.449462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.449724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.449746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.449990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.450011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.450231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.450254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.450437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.450459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.450715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.450737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.450975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.451001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.451251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.451275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.451515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.451538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 
00:32:35.913 [2024-11-20 06:43:07.451709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.451731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.451999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.452021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.452194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.452235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.452487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.452508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.452758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.452781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.453075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.453109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.453393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.453427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.453728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.453761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.453990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.454021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.913 [2024-11-20 06:43:07.454147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.454178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 
00:32:35.913 [2024-11-20 06:43:07.454400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.913 [2024-11-20 06:43:07.454434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.913 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.454727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.454749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.454945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.454967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.455210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.455233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.455515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.455557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.455833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.455866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.456151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.456183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.456466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.456498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.456784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.456816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 00:32:35.914 [2024-11-20 06:43:07.457100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.914 [2024-11-20 06:43:07.457132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.914 qpair failed and we were unable to recover it. 
00:32:35.914 [2024-11-20 06:43:07.457381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.914 [2024-11-20 06:43:07.457422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.914 qpair failed and we were unable to recover it.
00:32:35.914 [... the same three-line error sequence repeats for every reconnect attempt from 06:43:07.457 through 06:43:07.519, always with errno = 111 against tqpair=0x9d0ba0, addr=10.0.0.2, port=4420; duplicate repetitions elided ...]
00:32:35.919 [2024-11-20 06:43:07.519761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.920 [2024-11-20 06:43:07.519794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.920 qpair failed and we were unable to recover it.
00:32:35.920 [2024-11-20 06:43:07.520047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.520079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.520334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.520369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.520572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.520604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.520807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.520839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.521117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.521149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.521467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.521500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.521775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.521808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.522082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.522115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.522413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.522447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.522714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.522746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 
00:32:35.920 [2024-11-20 06:43:07.523043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.523076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.523350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.523385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.523669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.523708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.523925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.523957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.524167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.524199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.524486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.524518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.524797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.524829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.525115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.525147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.525433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.525467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.525720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.525752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 
00:32:35.920 [2024-11-20 06:43:07.526009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.526041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.526355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.526388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.526664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.526695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.526846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.526878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.527158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.527190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.527399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.527431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.527704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.527736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.527939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.527970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.528174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.528229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.528489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.528522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 
00:32:35.920 [2024-11-20 06:43:07.528802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.528834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.529118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.529149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.529439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.529472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.529750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.529783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.530069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.530100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.530333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.530367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.530674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.530706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.920 [2024-11-20 06:43:07.530987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.920 [2024-11-20 06:43:07.531019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.920 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.531244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.531277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.531479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.531517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 
00:32:35.921 [2024-11-20 06:43:07.531709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.531740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.531943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.531974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.532254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.532287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.532479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.532511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.532795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.532827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.533086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.533118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.533421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.533455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.533711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.533744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.534020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.534051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.534305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.534339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 
00:32:35.921 [2024-11-20 06:43:07.534547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.534580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.534765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.534796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.535078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.535110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.535382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.535416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.535647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.535679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.535880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.535912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.536172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.536226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.536442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.536475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.536674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.536705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.536892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.536924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 
00:32:35.921 [2024-11-20 06:43:07.537214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.537249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.537513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.537545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.537695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.537728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.538008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.538039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.538169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.538212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.538475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.538508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.538785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.538816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.539099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.539131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.539415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.539450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.539731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.539762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 
00:32:35.921 [2024-11-20 06:43:07.540045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.540077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.540317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.540351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.540631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.540662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.540946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.540977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.541262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.541297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.921 [2024-11-20 06:43:07.541580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.921 [2024-11-20 06:43:07.541612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.921 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.541836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.541868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.542114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.542147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.542459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.542492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.542766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.542798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 
00:32:35.922 [2024-11-20 06:43:07.542984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.543022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.543324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.543357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.543652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.543684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.543914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.543947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.544128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.544160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.544430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.544463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.544751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.544783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.545102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.545134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.545393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.545427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.545683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.545716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 
00:32:35.922 [2024-11-20 06:43:07.546009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.546041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.546266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.546299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.546580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.546613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.546880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.546910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.547125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.547157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.547376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.547408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.547604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.547635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.547897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.547929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.548229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.548263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.548527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.548559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 
00:32:35.922 [2024-11-20 06:43:07.548691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.548722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.548983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.549015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.549321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.549355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.549572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.549605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.549821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.549852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.550075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.550107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.550401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.550435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.550708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.550746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.551002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.551033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.551233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.551265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 
00:32:35.922 [2024-11-20 06:43:07.551461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.551493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.551778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.551811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.552110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.922 [2024-11-20 06:43:07.552142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.922 qpair failed and we were unable to recover it. 00:32:35.922 [2024-11-20 06:43:07.552413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.552446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.552742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.552774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.553044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.553075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.553379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.553413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.553621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.553653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.553851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.553883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.554092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.554124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 
00:32:35.923 [2024-11-20 06:43:07.554375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.554409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.554679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.554712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.554998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.555029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.555313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.555346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.555629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.555662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.555942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.555973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.556264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.556297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.556579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.556611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.556871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.556901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 00:32:35.923 [2024-11-20 06:43:07.557082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.923 [2024-11-20 06:43:07.557114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.923 qpair failed and we were unable to recover it. 
00:32:35.923 [2024-11-20 06:43:07.557396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.923 [2024-11-20 06:43:07.557431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.923 qpair failed and we were unable to recover it.
00:32:35.923 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim, with only the timestamps advancing, for roughly 200 further connection attempts between 06:43:07.557 and 06:43:07.617 ...]
00:32:35.929 [2024-11-20 06:43:07.617337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.929 [2024-11-20 06:43:07.617372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:35.929 qpair failed and we were unable to recover it.
00:32:35.929 [2024-11-20 06:43:07.617573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.617604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.617803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.617835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.618057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.618089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.618273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.618306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.618580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.618612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.618806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.618838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.619117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.619150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.619440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.619473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.619752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.619783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 00:32:35.929 [2024-11-20 06:43:07.619965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.929 [2024-11-20 06:43:07.619997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.929 qpair failed and we were unable to recover it. 
00:32:35.929 [2024-11-20 06:43:07.620215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.620250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.620434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.620466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.620653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.620690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.620880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.620912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.621178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.621229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.621450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.621482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.621629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.621662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.621962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.621994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.622281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.622315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.622525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.622557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 
00:32:35.930 [2024-11-20 06:43:07.622834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.622865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.623066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.623098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.623365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.623399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.623655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.623687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.623943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.623977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.624280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.624313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.624459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.624493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.624746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.624778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.625063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.625095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.625381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.625415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 
00:32:35.930 [2024-11-20 06:43:07.625616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.625648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.625956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.625988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.626224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.626257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.626543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.626574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.626845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.626877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.627168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.627199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.627365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.627397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.627636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.627668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.627880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.627912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.628247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.628281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 
00:32:35.930 [2024-11-20 06:43:07.628565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.628598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.628868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.628900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.629107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.629139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.629281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.629314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.629619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.629651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.629936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.629968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.630249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.930 [2024-11-20 06:43:07.630283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.930 qpair failed and we were unable to recover it. 00:32:35.930 [2024-11-20 06:43:07.630510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.630542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.630846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.630877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.631155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.631187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 
00:32:35.931 [2024-11-20 06:43:07.631475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.631508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.631788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.631820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.632034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.632065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.632341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.632380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.632609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.632640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.632847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.632880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.633082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.633113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.633366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.633400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.633622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.633654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.633900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.633932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 
00:32:35.931 [2024-11-20 06:43:07.634118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.634150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.634424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.634458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.634670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.634701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.634998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.635030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.635255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.635289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.635510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.635541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.635795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.635827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.636136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.636169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.636450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.636483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.636689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.636721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 
00:32:35.931 [2024-11-20 06:43:07.637011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.637043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.637330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.637364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.637567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.637600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.637737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.637768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.638045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.638077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.638356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.638390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.638623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.638654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.638912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.638944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.639172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.639215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.639507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.639540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 
00:32:35.931 [2024-11-20 06:43:07.639802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.639839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.640135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.931 [2024-11-20 06:43:07.640167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.931 qpair failed and we were unable to recover it. 00:32:35.931 [2024-11-20 06:43:07.640398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.640430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.640714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.640745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.640948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.640979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.641241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.641274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.641559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.641591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.641862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.641894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.642188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.642229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.642504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.642536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 
00:32:35.932 [2024-11-20 06:43:07.642741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.642773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.643003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.643035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.643338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.643376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.643656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.643690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.643908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.643939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.644172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.644223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.644483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.644516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.644728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.644759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.645026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.645058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.645359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.645393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 
00:32:35.932 [2024-11-20 06:43:07.645678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.645710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.645914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.645946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.646128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.646159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.646449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.646482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.646743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.646775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.647024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.647056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.647305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.647338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.647484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.647515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.647648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.647680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.647891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.647924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 
00:32:35.932 [2024-11-20 06:43:07.648168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.648199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.648422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.648455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.648686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.648717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.648997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.649028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.649340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.649373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.649575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.649606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.649809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.649840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.650118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.650150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.650446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.650478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 00:32:35.932 [2024-11-20 06:43:07.650737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.932 [2024-11-20 06:43:07.650769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.932 qpair failed and we were unable to recover it. 
00:32:35.933 [2024-11-20 06:43:07.650914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.650945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.651228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.651269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.651541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.651573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.651803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.651835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.652119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.652151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.652434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.652467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.652650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.652683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.652970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.653000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.653228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.653263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.653540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.653572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 
00:32:35.933 [2024-11-20 06:43:07.653763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.653794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.653998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.654029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.654323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.654357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.654575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.654607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.654849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.654881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.655100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.655132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.655369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.655403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.655631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.655662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.655848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.655880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 00:32:35.933 [2024-11-20 06:43:07.656141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.933 [2024-11-20 06:43:07.656173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.933 qpair failed and we were unable to recover it. 
00:32:35.938 [2024-11-20 06:43:07.707435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.938 [2024-11-20 06:43:07.707468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.938 qpair failed and we were unable to recover it. 00:32:35.938 [2024-11-20 06:43:07.707667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.938 [2024-11-20 06:43:07.707698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.938 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.707904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.707939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.708069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.708100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.708244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.708278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.708466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.708498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.708714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.708746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.708881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.708912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.709044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.709075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.709285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.709320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 
00:32:35.939 [2024-11-20 06:43:07.709445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.709482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.709616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.709649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.709859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.709891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.710116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.710149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.710279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.710314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.710457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.710491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.710646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.710682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.710824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.710858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.710981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.711013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.711198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.711245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 
00:32:35.939 [2024-11-20 06:43:07.711432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.711465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.711621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.711654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.711789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.711821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.712128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.712163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.712381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.712414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.712564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.712596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.712727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.712760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.712871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.712905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:35.939 qpair failed and we were unable to recover it. 00:32:35.939 [2024-11-20 06:43:07.713028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.939 [2024-11-20 06:43:07.713059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.713192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.713239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 
00:32:36.235 [2024-11-20 06:43:07.713447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.713480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.713672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.713707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.713983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.714015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.714152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.714183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.714392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.714426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.714619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.714652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.714771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.714809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.715076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.715109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.715255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.715291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.715636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.715670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 
00:32:36.235 [2024-11-20 06:43:07.715805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.715837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.716040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.716072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.716320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.716353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.716542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.716573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.716723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.716756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.716942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.716975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.717174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.717428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.717585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.717618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.717807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.717838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.718053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.718085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 
00:32:36.235 [2024-11-20 06:43:07.718237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.718269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.718528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.718561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.718761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.718795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.719077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.719108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.719304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.719339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.719527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.235 [2024-11-20 06:43:07.719559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.235 qpair failed and we were unable to recover it. 00:32:36.235 [2024-11-20 06:43:07.719805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.719837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.719981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.720014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.720227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.720261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.720452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.720485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 
00:32:36.236 [2024-11-20 06:43:07.720682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.720713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.720924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.720957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.721221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.721254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.721508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.721548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.721751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.721785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.721981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.722013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.722127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.722159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.722394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.722427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.722562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.722596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.722784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.722816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 
00:32:36.236 [2024-11-20 06:43:07.722951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.722985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.723170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.723228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.723450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.723482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.723626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.723657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.723793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.723826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.724117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.724149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.724289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.724323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.724615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.724647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.724776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.724809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.724968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.725000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 
00:32:36.236 [2024-11-20 06:43:07.725218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.725251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.725460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.725494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.725760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.725793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.725924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.725956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.726083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.726116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.726261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.726296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.726479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.726512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.726707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.726739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.726936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.726969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.727151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.727183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 
00:32:36.236 [2024-11-20 06:43:07.727376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.727410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.727543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.727576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.727715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.727747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.727940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.727973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.728175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.728219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.728421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.728453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.728637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.728669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.728929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.728961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.729073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.729106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.729294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.729328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 
00:32:36.236 [2024-11-20 06:43:07.729522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.729556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.729678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.729711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.729833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.729865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.730072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.730104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.730354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.730394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.730605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.730637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.730833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.730865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.731046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.731078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.731199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.731261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.731391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.731425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 
00:32:36.236 [2024-11-20 06:43:07.731635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.731666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.731889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.731921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.732111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.732143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.732305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.732338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.732614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.732645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.732793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.732826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.733019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.733051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.733244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.733277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.733474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.733507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.733659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.733691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 
00:32:36.236 [2024-11-20 06:43:07.733806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.733838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.733972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.734004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.734262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.734296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.734492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.734525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.734650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.734681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.734832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.734864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.735059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.236 [2024-11-20 06:43:07.735090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.236 qpair failed and we were unable to recover it. 00:32:36.236 [2024-11-20 06:43:07.735223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.735257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.735504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.735538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.735676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.735708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 
00:32:36.237 [2024-11-20 06:43:07.735821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.735853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.736150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.736187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.736335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.736367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.736565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.736596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.736778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.736810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.737009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.737041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.737171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.737229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.737357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.737389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.737496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.737528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 00:32:36.237 [2024-11-20 06:43:07.737715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.237 [2024-11-20 06:43:07.737747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.237 qpair failed and we were unable to recover it. 
00:32:36.237 [2024-11-20 06:43:07.737929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.237 [2024-11-20 06:43:07.737960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.237 qpair failed and we were unable to recover it.
00:32:36.237 [... identical connect() failures (errno = 111) against tqpair=0x9d0ba0 at addr=10.0.0.2, port=4420 repeat from 06:43:07.738150 through 06:43:07.745172, each ending "qpair failed and we were unable to recover it." ...]
00:32:36.237 [2024-11-20 06:43:07.745383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.237 [2024-11-20 06:43:07.745461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:36.237 qpair failed and we were unable to recover it.
00:32:36.237 [2024-11-20 06:43:07.745642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.237 [2024-11-20 06:43:07.745714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.237 qpair failed and we were unable to recover it.
00:32:36.238 [... identical connect() failures (errno = 111) against tqpair=0x7fd15c000b90 at addr=10.0.0.2, port=4420 repeat from 06:43:07.745920 through 06:43:07.781466, each ending "qpair failed and we were unable to recover it." ...]
00:32:36.240 [2024-11-20 06:43:07.781584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.781615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.781751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.781783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.782058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.782089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.782219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.782252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.782381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.782413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.782594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.782625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.782734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.782766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.782903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.782935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.783128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.783159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.783362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.783403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 
00:32:36.240 [2024-11-20 06:43:07.783587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.783620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.783797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.783828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.784071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.784103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.784290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.784323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.784519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.784550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.784660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.784691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.784871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.784902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.785170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.785210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.785331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.785362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.785486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.785516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 
00:32:36.240 [2024-11-20 06:43:07.785656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.785688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.785877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.785908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.786119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.786150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.786385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.786418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.786541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.786571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.786776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.786808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.786944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.786976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.787091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.787123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.787252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.787284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.787417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.787447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 
00:32:36.240 [2024-11-20 06:43:07.787625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.787656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.787831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.787863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.788045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.788076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.788262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.788294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.788398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.788429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.788629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.788661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.788798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.788830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.789009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.789041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.789148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.789179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.789408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.789441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 
00:32:36.240 [2024-11-20 06:43:07.789550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.789583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.789699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.789729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.789862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.789894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.790089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.790120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.790290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.790322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.790433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.790464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.790586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.790618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.240 [2024-11-20 06:43:07.790792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.240 [2024-11-20 06:43:07.790823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.240 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.790945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.790976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.791154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.791192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 
00:32:36.241 [2024-11-20 06:43:07.791400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.791432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.791607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.791639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.791843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.791874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.791999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.792030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.792273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.792306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.792425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.792457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.792582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.792613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.792746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.792777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.792892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.792922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.793030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.793060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 
00:32:36.241 [2024-11-20 06:43:07.793181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.793222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.793331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.793363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.793491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.793522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.793642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.793673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.793854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.793885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.794004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.794036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.794154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.794186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.794402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.794434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.794557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.794588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.794728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.794758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 
00:32:36.241 [2024-11-20 06:43:07.794881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.794912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.795028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.795059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.795273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.795306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.795485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.795517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.795691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.795722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.795849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.795879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.796001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.796032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.796149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.796181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.796313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.796346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.796476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.796508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 
00:32:36.241 [2024-11-20 06:43:07.796680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.796713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.796955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.796987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.797118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.797149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.797269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.797300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.797461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.797496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.797632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.797663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.797789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.797820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.797999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.798031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.798146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.798178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.798405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.798443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 
00:32:36.241 [2024-11-20 06:43:07.799888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.799943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.800232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.800268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.801640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.801692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.801961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.801998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.802172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.802219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.802353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.802384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.802587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.802619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.802736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.802768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.802880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.802910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.803183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.803223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 
00:32:36.241 [2024-11-20 06:43:07.803419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.803450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.803577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.803608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.803725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.803757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.803873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.803904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.804112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.804144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.804317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.804352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.804548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.804579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.804778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.804810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.805009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.805040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.805164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.805195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 
00:32:36.241 [2024-11-20 06:43:07.805318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.805349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.805535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.805565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.805759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.805790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.805971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.806001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.806128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.806177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.806365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.241 [2024-11-20 06:43:07.806398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.241 qpair failed and we were unable to recover it. 00:32:36.241 [2024-11-20 06:43:07.806515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.806545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.806661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.806691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.806815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.806846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.806947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.806978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 
00:32:36.242 [2024-11-20 06:43:07.807151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.807183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.807394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.807426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.807600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.807632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.807818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.807849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.807980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.808011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.808137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.808169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.808304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.808334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.808510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.808540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.808653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.808683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 00:32:36.242 [2024-11-20 06:43:07.808802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.242 [2024-11-20 06:43:07.808840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.242 qpair failed and we were unable to recover it. 
00:32:36.242 [2024-11-20 06:43:07.809016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.242 [2024-11-20 06:43:07.809048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.242 qpair failed and we were unable to recover it.
00:32:36.245 [... the same three-line error repeats continuously from 06:43:07.809016 through 06:43:07.847693: every connect() attempt to 10.0.0.2, port 4420 on tqpair=0x7fd15c000b90 fails with errno = 111 and the qpair cannot be recovered ...]
00:32:36.245 [2024-11-20 06:43:07.847824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.847855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.847993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.848024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.848215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.848248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.848374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.848405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.848512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.848544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.848726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.848757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.849000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.849031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.849143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.849174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.849319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.849351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.849532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.849564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 
00:32:36.245 [2024-11-20 06:43:07.849739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.849770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.849950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.849982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.850107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.850137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.850326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.850360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.850557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.850588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.850709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.850741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.850855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.850886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.850988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.851019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.851122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.851153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.851348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.851382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 
00:32:36.245 [2024-11-20 06:43:07.851559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.851589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.851710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.851743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.851949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.851981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.852167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.852197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.852405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.852437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.852569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.852599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.852724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.852756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.852996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.853027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.853149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.853180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.853432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.853469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 
00:32:36.245 [2024-11-20 06:43:07.853605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.853636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.853808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.853840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.854018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.854049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.854237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.854271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.854394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.854425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.854547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.854579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.854754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.854785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.854973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.855003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.855112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.245 [2024-11-20 06:43:07.855144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.245 qpair failed and we were unable to recover it. 00:32:36.245 [2024-11-20 06:43:07.855390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.855424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 
00:32:36.246 [2024-11-20 06:43:07.855663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.855696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.855986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.856017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.856199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.856242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.856436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.856468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.856675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.856708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.856813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.856844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.856979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.857010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.857199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.857242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.857428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.857460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.857680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.857712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 
00:32:36.246 [2024-11-20 06:43:07.857816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.857847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.858059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.858089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.858244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.858276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.858458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.858490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.858607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.858638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.858763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.858794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.858992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.859029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.859141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.859171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.859316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.859348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.859536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.859568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 
00:32:36.246 [2024-11-20 06:43:07.859810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.859842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.859946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.859978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.860107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.860138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.860340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.860375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.860510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.860541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.860786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.860818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.861003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.861034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.861298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.861331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.861531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.861563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.861757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.861789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 
00:32:36.246 [2024-11-20 06:43:07.862040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.862072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.862257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.862290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.862488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.862519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.862707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.862738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.862866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.862897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.863081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.863113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.863220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.863253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.863475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.863507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.863635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.863668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.863791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.863821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 
00:32:36.246 [2024-11-20 06:43:07.863956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.863987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.864175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.864226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.864416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.864447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.864623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.864655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.864828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.864859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.865079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.865111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.865229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.865263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.865395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.865426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.865667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.865698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.865825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.865857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 
00:32:36.246 [2024-11-20 06:43:07.866050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.866081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.866218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.866251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.866451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.866482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.866599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.866631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.866748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.866779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.867021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.867053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.867179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.867226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.867418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.867450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.867578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.867614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.867854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.867886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 
00:32:36.246 [2024-11-20 06:43:07.868059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.868090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.868236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.868270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.868418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.868450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.868583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.868614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.868746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.868777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.868901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.868932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.869215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.869248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.869355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.869387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.869490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.246 [2024-11-20 06:43:07.869522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.246 qpair failed and we were unable to recover it. 00:32:36.246 [2024-11-20 06:43:07.869720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.869751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 
00:32:36.247 [2024-11-20 06:43:07.869881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.869914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.870097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.870129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.870311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.870345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.870531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.870564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.870696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.870727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.870861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.870893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.871018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.871049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.871167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.871198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.871323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.871355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.871481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.871514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 
00:32:36.247 [2024-11-20 06:43:07.871693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.871725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.871853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.871885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.872008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.872039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.872182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.872222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.872417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.872448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.872625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.872657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.872771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.872803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.872992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.873023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.873218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.873252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 00:32:36.247 [2024-11-20 06:43:07.873358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.873389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it. 
00:32:36.247 [2024-11-20 06:43:07.873598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.247 [2024-11-20 06:43:07.873630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.247 qpair failed and we were unable to recover it.
00:32:36.247 [... the same three-part pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back from 06:43:07.873598 through 06:43:07.892007; duplicate entries elided ...]
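For context on the repeated failure above: errno = 111 on Linux is ECONNREFUSED. connect() reached 10.0.0.2, but nothing was accepting on port 4420 (the default NVMe over Fabrics TCP port), so the peer answered with a TCP RST and the kernel surfaced it as ECONNREFUSED. The minimal standalone C sketch below (illustration only; this is not SPDK's posix_sock_create, and the address and port simply mirror the log) reproduces that exact errno:

    /* Standalone sketch: reproduce the errno = 111 (ECONNREFUSED) that
     * posix_sock_create reports when connect() is refused. Not SPDK code;
     * address and port are taken from the log purely for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP listen port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            int err = errno;                          /* save before printing */
            /* With no listener on 10.0.0.2:4420 this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", err, strerror(err));
        }
        close(fd);
        return 0;
    }

Run against a host with no listener on the port, it prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c line above.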
00:32:36.248 [2024-11-20 06:43:07.892139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.248 [2024-11-20 06:43:07.892171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.248 qpair failed and we were unable to recover it.
00:32:36.248 [2024-11-20 06:43:07.892463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.248 [2024-11-20 06:43:07.892535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.248 qpair failed and we were unable to recover it.
00:32:36.250 [... from 06:43:07.892463 the identical pattern repeats back-to-back for the new tqpair=0x7fd158000b90, same addr=10.0.0.2, port=4420, through 06:43:07.912926; duplicate entries elided ...]
00:32:36.250 [2024-11-20 06:43:07.913100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.913131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.913265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.913299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.913490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.913523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.913717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.913748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.913873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.913905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.914043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.914074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.914208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.914242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.914352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.914384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.914496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.914527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.914700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.914732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 
00:32:36.250 [2024-11-20 06:43:07.914846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.914879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.915060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.915094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.915213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.915246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.915445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.915479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.915606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.915638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.915753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.915785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.915984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.916016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.916194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.916248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.916353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.916385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.916676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.916709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 
00:32:36.250 [2024-11-20 06:43:07.916877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.916909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.917026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.917058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.917196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.917241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.917419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.917451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.917566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.917597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.917716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.917748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.917866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.250 [2024-11-20 06:43:07.917899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.250 qpair failed and we were unable to recover it. 00:32:36.250 [2024-11-20 06:43:07.918018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.918049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.918237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.918272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.918544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.918576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 
00:32:36.251 [2024-11-20 06:43:07.918702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.918733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.918844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.918876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.918985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.919018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.919198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.919237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.919348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.919379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.919500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.919532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.919659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.919698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.919876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.919908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.920028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.920061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.920236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.920270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 
00:32:36.251 [2024-11-20 06:43:07.920414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.920446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.920644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.920677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.920787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.920819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.920940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.920972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.921161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.921193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.921381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.921413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.921527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.921559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.921836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.921867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.921985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.922016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.922136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.922168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 
00:32:36.251 [2024-11-20 06:43:07.922395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.922429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.922621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.922653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.922791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.922823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.922939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.922970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.923082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.923115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.923350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.923384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.923496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.923528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.923709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.923741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.923928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.923960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.924083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.924116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 
00:32:36.251 [2024-11-20 06:43:07.924304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.924337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.924446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.924478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.924602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.924634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.924798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.924870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.925141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.925176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.925386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.925420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.925594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.925625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.925753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.925786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.926026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.926057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.926178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.926218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 
00:32:36.251 [2024-11-20 06:43:07.926353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.926386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.926556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.926587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.926790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.926821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.926944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.926977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.927153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.927183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.927341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.927373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.927492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.927533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.927765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.927797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.927987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.928019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.928187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.928230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 
00:32:36.251 [2024-11-20 06:43:07.928355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.928387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.928518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.928550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.928682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.928715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.928901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.928932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.929038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.929070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.929192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.929238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.929482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.929514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.929702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.929734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.929841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.929876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.930059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.930090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 
00:32:36.251 [2024-11-20 06:43:07.930224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.930258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.930505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.930537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.930727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.930760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.930950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.930983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.931159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.931190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.931308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.931341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.251 [2024-11-20 06:43:07.931525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.251 [2024-11-20 06:43:07.931557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.251 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.931679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.931711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.931852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.931884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.931996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.932028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 
00:32:36.252 [2024-11-20 06:43:07.932141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.932173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.932426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.932459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.932746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.932779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.932995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.933031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.933158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.933189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.933378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.933411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.933545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.933577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.933718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.933749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.933864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.933896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.934043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.934075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 
00:32:36.252 [2024-11-20 06:43:07.934192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.934232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.934406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.934437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.934610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.934642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.934751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.934783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.935006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.935039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.936818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.936876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.937146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.937189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.937466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.937499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.937714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.937747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.937929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.937962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 
00:32:36.252 [2024-11-20 06:43:07.938147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.938178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.938324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.938358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.938545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.938577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.938815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.938848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.938966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.938997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.939171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.939213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.939348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.939381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.939592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.939623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.939812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.939844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 00:32:36.252 [2024-11-20 06:43:07.940014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.252 [2024-11-20 06:43:07.940046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.252 qpair failed and we were unable to recover it. 
00:32:36.252 [2024-11-20 06:43:07.940174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.252 [2024-11-20 06:43:07.940239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:32:36.252 qpair failed and we were unable to recover it.
00:32:36.252 [... the same three-line failure (posix_sock_create connect() errno = 111, then nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it.") repeats for roughly 200 further attempts between 06:43:07.940 and 06:43:07.980, alternating between tqpair=0x7fd158000b90 and tqpair=0x7fd15c000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:32:36.255 [2024-11-20 06:43:07.980337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.255 [2024-11-20 06:43:07.980369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.255 qpair failed and we were unable to recover it.
00:32:36.255 [2024-11-20 06:43:07.980482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.980514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.980708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.980740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.980976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.981008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.981117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.981154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.981289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.981324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.981450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.981483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.981590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.981622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.981803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.981835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.982078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.982110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.982299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.982332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 
00:32:36.255 [2024-11-20 06:43:07.982578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.982610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.982731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.982763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.982946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.982978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.983153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.983185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.983440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.983473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.983660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.983690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.983867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.983900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.984032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.984065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.984242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.984276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.984379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.984413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 
00:32:36.255 [2024-11-20 06:43:07.984595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.984627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.984730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.984763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.984947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.984978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.985152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.985184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.985307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.985340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.985522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.985556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.985743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.985774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.255 [2024-11-20 06:43:07.985905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.255 [2024-11-20 06:43:07.985937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.255 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.986040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.986073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.986251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.986284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 
00:32:36.256 [2024-11-20 06:43:07.986576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.986609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.986737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.986770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.986961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.986993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.987098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.987129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.987274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.987308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.987504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.987536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.987658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.987689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.987956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.987988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.988099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.988130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.988314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.988348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 
00:32:36.256 [2024-11-20 06:43:07.988534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.988566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.988680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.988713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.988836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.988868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.988982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.989020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.989279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.989311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.989427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.989460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.989632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.989665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.989770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.989801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.989906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.989939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.990051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.990083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 
00:32:36.256 [2024-11-20 06:43:07.990265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.990297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.990419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.990452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.990575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.990608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.990718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.990751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.990933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.990965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.991099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.991130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.991306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.991338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.991516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.991549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.991678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.991710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.991928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.991961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 
00:32:36.256 [2024-11-20 06:43:07.992139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.992170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.992384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.992418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.992541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.992573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.992755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.992786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.992976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.993008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.993120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.993151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.993294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.993327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.993510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.993542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.993717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.993748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.993879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.993911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 
00:32:36.256 [2024-11-20 06:43:07.994034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.994067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.994246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.994279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.994447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.994480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.994660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.994692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.994873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.994905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.995043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.995074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.995248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.995282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.995463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.995495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.995627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.995668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.995884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.995916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 
00:32:36.256 [2024-11-20 06:43:07.996115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.996147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.996259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.996293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.996409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.996441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.996566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.996604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.996708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.996740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.996866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.996898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.997138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.997169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.997297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.997330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.997445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.997478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.997590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.997622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 
00:32:36.256 [2024-11-20 06:43:07.997821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.256 [2024-11-20 06:43:07.997854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.256 qpair failed and we were unable to recover it. 00:32:36.256 [2024-11-20 06:43:07.998031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.998063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:07.998253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.998286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:07.998402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.998435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:07.998544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.998576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:07.998700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.998732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:07.998917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.998950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:07.999061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.999094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:07.999294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.999328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:07.999454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.999487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 
00:32:36.257 [2024-11-20 06:43:07.999617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:07.999650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.000015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.000048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.000172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.000211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.000331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.000364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.000484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.000517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.000643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.000675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.000778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.000810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.000977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.001010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.001189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.001234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.001480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.001514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 
00:32:36.257 [2024-11-20 06:43:08.001711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.001744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.001942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.001973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.002157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.002190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.002307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.002341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.002528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.002560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.002683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.002715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.002824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.002855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.003114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.003147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.003343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.003377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.003551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.003584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 
00:32:36.257 [2024-11-20 06:43:08.003792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.003823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.003933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.003965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.004084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.004117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.004308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.004353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.004473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.004506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.004708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.004740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.004866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.004899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.005073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.005106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.005239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.005272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 00:32:36.257 [2024-11-20 06:43:08.005404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.257 [2024-11-20 06:43:08.005436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.257 qpair failed and we were unable to recover it. 
00:32:36.257 [2024-11-20 06:43:08.005540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.257 [2024-11-20 06:43:08.005571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.257 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it.") repeats roughly two hundred times from 06:43:08.005540 through 06:43:08.045222 (Jenkins time 00:32:36.257-00:32:36.540), alternating between tqpair=0x7fd15c000b90 and tqpair=0x9d0ba0, always with addr=10.0.0.2, port=4420 ...]
00:32:36.540 [2024-11-20 06:43:08.045329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.540 [2024-11-20 06:43:08.045361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.540 qpair failed and we were unable to recover it. 00:32:36.540 [2024-11-20 06:43:08.045474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.540 [2024-11-20 06:43:08.045505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.540 qpair failed and we were unable to recover it. 00:32:36.540 [2024-11-20 06:43:08.045629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.045660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.045849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.045880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.046065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.046096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.046339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.046372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.046556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.046588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.046712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.046742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.046918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.046951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.047146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.047178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 
00:32:36.541 [2024-11-20 06:43:08.047312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.047344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.047515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.047547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.047666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.047698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.047889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.047920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.048112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.048143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.048329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.048362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.048548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.048581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.048803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.048837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.048994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.049026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.049213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.049246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 
00:32:36.541 [2024-11-20 06:43:08.049428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.049461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.049649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.049681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.049853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.049885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.050077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.050108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.050251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.050286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.050411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.050444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.050594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.050646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.050788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.050821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.050936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.050968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.051187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.051229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 
00:32:36.541 [2024-11-20 06:43:08.051404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.051435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.051549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.051581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.051828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.051860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.051986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.052018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.052134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.052165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.052371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.052405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.052525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.052557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.052739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.052770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.052914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.541 [2024-11-20 06:43:08.052946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.541 qpair failed and we were unable to recover it. 00:32:36.541 [2024-11-20 06:43:08.053131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.053172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 
00:32:36.542 [2024-11-20 06:43:08.053371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.053403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.053578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.053610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.053797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.053829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.053958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.053990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.054183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.054226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.054349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.054381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.054555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.054586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.054699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.054731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.054925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.054957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.055067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.055099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 
00:32:36.542 [2024-11-20 06:43:08.055237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.055272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.055513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.055545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.055658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.055690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.055935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.055968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.056150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.056183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.056312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.056344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.056521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.056552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.056670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.056701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.056884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.056916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.057108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.057139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 
00:32:36.542 [2024-11-20 06:43:08.057254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.057287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.057471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.057504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.057625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.057657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.057862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.057893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.058086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.058118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.058243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.058277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.058533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.058605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.058837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.058873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.059011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.059044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.059164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.059197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 
00:32:36.542 [2024-11-20 06:43:08.059389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.059422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.059666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.059698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.059888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.059920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.060115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.060147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.060366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.060401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.060581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.060613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.542 qpair failed and we were unable to recover it. 00:32:36.542 [2024-11-20 06:43:08.060720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.542 [2024-11-20 06:43:08.060752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.060871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.060903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.061094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.061126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.061246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.061278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 
00:32:36.543 [2024-11-20 06:43:08.061414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.061445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.061631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.061663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.061852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.061885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.062018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.062049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.062155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.062185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.062322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.062352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.062476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.062506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.062747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.062779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.062968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.063000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.063109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.063139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 
00:32:36.543 [2024-11-20 06:43:08.063341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.063373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.063485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.063517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.063788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.063822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.063944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.063982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.064131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.064162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.064294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.064328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.064441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.064473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.064592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.064622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.064860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.064891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.065016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.065048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 
00:32:36.543 [2024-11-20 06:43:08.065236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.065269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.065387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.065418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.065593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.065625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.065806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.065838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.065962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.065993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.066226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.066258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.066452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.066482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.066611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.066643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.066749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.066780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.067035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.067065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 
00:32:36.543 [2024-11-20 06:43:08.067175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.067213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.067397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.067429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.067549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.067579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.067697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.067728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.543 [2024-11-20 06:43:08.067841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.543 [2024-11-20 06:43:08.067873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.543 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.068056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.068087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.068196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.068253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.068381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.068414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.068533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.068563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.068740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.068771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 
00:32:36.544 [2024-11-20 06:43:08.068881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.068914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.069032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.069064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.069215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.069249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.069359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.069390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.069497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.069529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.069801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.069833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.069942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.069973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.070096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.070128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.070260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.070295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 00:32:36.544 [2024-11-20 06:43:08.070558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.544 [2024-11-20 06:43:08.070591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.544 qpair failed and we were unable to recover it. 
00:32:36.544 [2024-11-20 06:43:08.070778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.544 [2024-11-20 06:43:08.070811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.544 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / sock connection error of tqpair=0x9d0ba0 / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt from 06:43:08.070935 through 06:43:08.088622]
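errno = 111 is ECONNREFUSED on Linux: each connect() from the initiator reaches 10.0.0.2:4420 while nothing is listening on the NVMe/TCP port, the kernel answers with a TCP RST, and SPDK's posix_sock_create()/nvme_tcp_qpair_connect_sock pair logs the failure and the host retries. A minimal way to reproduce the same errno from a shell, assuming only a local port with no listener (the address below is illustrative, not taken from this run):

    # bash's /dev/tcp/<host>/<port> redirection issues a plain connect(2);
    # with no listener bound to the port the kernel refuses it, which is
    # the same errno 111 (ECONNREFUSED) the qpair errors above report.
    if ! (: </dev/tcp/127.0.0.1/4420) 2>/dev/null; then
        echo "connect() refused: errno 111 / ECONNREFUSED"
    fi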
00:32:36.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 710194 Killed "${NVMF_APP[@]}" "$@"
00:32:36.547 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:32:36.547 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:36.547 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:36.547 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:36.547 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for attempts 06:43:08.088801 through 06:43:08.096778, interleaved with the trace lines above]
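The refused connects are expected at this point in the test: target_disconnect.sh has just SIGKILLed the running target (the 'Killed "${NVMF_APP[@]}"' message above), so the initiator's reconnect loop is hammering a port with no listener until disconnect_init and nvmfappstart bring a fresh target up. A condensed, hypothetical sketch of that kill-and-restart step; the real script's internals are not shown in this log, and the variable handling here is assumed:

    # Hypothetical condensation of the disconnect step: kill the target,
    # let the initiator spin on ECONNREFUSED, then relaunch nvmf_tgt.
    sudo kill -9 "$nvmfpid"              # old target dies; connects start failing
    wait "$nvmfpid" 2>/dev/null || true  # reap it if it was our child
    "${NVMF_APP[@]}" -m 0xF0 &           # new target instance (nvmfappstart -m 0xF0)
    nvmfpid=$!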
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=710935
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 710935
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 710935 ']'
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:36.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:36.548 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the same error triplet repeats for attempts 06:43:08.098261 through 06:43:08.101022, interleaved with the trace lines above]
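The replacement target is launched inside the cvl_0_0_ns_spdk network namespace, which is where the 10.0.0.2 listen address lives; -m 0xF0 pins it to cores 4-7, and -i 0 selects shared-memory instance id 0. The namespace itself was plumbed earlier in this job, outside this excerpt; a sketch of the usual pattern follows, with the interface name (cvl_0_0) assumed for illustration:

    # Illustrative namespace plumbing of the kind the harness relies on.
    sudo ip netns add cvl_0_0_ns_spdk               # isolated namespace for the target
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move the e810 port into it (name assumed)
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &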
[the same connect() failed (errno = 111) / sock connection error of tqpair=0x9d0ba0 / "qpair failed and we were unable to recover it." triplet repeats for every attempt from 06:43:08.101223 through 06:43:08.109875]
00:32:36.550 [2024-11-20 06:43:08.109984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.110016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.110126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.110158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.110348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.110379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.110507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.110539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.110717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.110756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.111005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.111036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.111238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.111269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.111407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.111438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.111613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.111646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.111892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.111927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 
00:32:36.550 [2024-11-20 06:43:08.112032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.112063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.112255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.112283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.112388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.112428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.112602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.112633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.112807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.112839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.112944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.112975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.113151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.113183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.113326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.113360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.113487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.113521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.113629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.113660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 
00:32:36.550 [2024-11-20 06:43:08.113952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.113984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.114112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.114144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.550 [2024-11-20 06:43:08.114277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.550 [2024-11-20 06:43:08.114308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.550 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.114487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.114519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.114763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.114795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.115035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.115065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.115192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.115236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.115369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.115400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.115519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.115548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.115687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.115719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 
00:32:36.551 [2024-11-20 06:43:08.115894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.115928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.116098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.116128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.116269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.116302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.116494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.116527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.116741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.116773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.116946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.116977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.117104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.117133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.117372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.117405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.117529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.117559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.117670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.117701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 
00:32:36.551 [2024-11-20 06:43:08.117830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.117861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.117967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.117998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.118189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.118232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.118350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.118380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.118577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.118609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.118732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.118768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.118892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.118923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.119107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.119137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.119319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.119350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.119535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.119565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 
00:32:36.551 [2024-11-20 06:43:08.119677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.119708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.119886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.119918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.120157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.120188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.120353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.120384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.120592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.120623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.120868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.120899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.121007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.121037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.121156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.121188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.121370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.121401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.121530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.121559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 
00:32:36.551 [2024-11-20 06:43:08.121748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.551 [2024-11-20 06:43:08.121780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.551 qpair failed and we were unable to recover it. 00:32:36.551 [2024-11-20 06:43:08.121956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.121987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.122159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.122189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.122324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.122354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.122471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.122503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.122609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.122639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.122743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.122775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.122968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.123000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.123100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.123140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.123352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.123385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 
00:32:36.552 [2024-11-20 06:43:08.123513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.123546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.123730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.123762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.123875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.123912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.124024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.124055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.124255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.124288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.124396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.124426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.124607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.124638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.124808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.124839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.125023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.125055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.125187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.125226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 
00:32:36.552 [2024-11-20 06:43:08.125483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.125515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.125726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.125756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.125859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.125890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.126013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.126049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.126164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.126194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.126332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.126363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.126608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.126680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.126904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.126940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.127224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.127259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.127458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.127489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 
00:32:36.552 [2024-11-20 06:43:08.127667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.127699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.127827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.127859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.128010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.128042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.128154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.128187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.128339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.128371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.128545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.128577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.128707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.128739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.128848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.128880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.129016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.552 [2024-11-20 06:43:08.129047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.552 qpair failed and we were unable to recover it. 00:32:36.552 [2024-11-20 06:43:08.129153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.129195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 
00:32:36.553 [2024-11-20 06:43:08.129325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.129357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.129563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.129596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.129768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.129800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.129903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.129935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.130125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.130156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.130433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.130467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.130589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.130622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.130818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.130851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.131049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.131080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.131262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.131296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 
00:32:36.553 [2024-11-20 06:43:08.131407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.131439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.131627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.131660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.131782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.131814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.131998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.132030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.132217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.132251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.132369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.132401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.132576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.132607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.132734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.132766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.132937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.132968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.133085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.133118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 
00:32:36.553 [2024-11-20 06:43:08.133313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.133348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.133455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.133488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.133613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.133647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.133752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.133785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.133894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.133926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.134122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.134155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.134442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.134480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.134620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.134654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.134796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.134828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.134952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.134985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 
00:32:36.553 [2024-11-20 06:43:08.135091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.135123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.135238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.135272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.135511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.135542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.135758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.135790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.135893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.135925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.136188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.136253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.553 [2024-11-20 06:43:08.136533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.553 [2024-11-20 06:43:08.136565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.553 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.136812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.136843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.137056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.137092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.137234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.137267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 
00:32:36.554 [2024-11-20 06:43:08.137414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.137445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.137643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.137675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.137797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.137828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.137938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.137969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.138150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.138182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.138318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.138349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.138526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.138558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.138687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.138717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.138904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.138935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 00:32:36.554 [2024-11-20 06:43:08.139068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.554 [2024-11-20 06:43:08.139099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.554 qpair failed and we were unable to recover it. 
00:32:36.554 [2024-11-20 06:43:08.139215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.139248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.139427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.139458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.139670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.139701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.139900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.139937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.140059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.140090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.140275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.140308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.140423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.140454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.140580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.140611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.140740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.140771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.140898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.140929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.141034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.141066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.141196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.141240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.141414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.141445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.141716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.141747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.141971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.142002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.142179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.142219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.142403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.142435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.142631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.142662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.142844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.142875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.143069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.143100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.554 qpair failed and we were unable to recover it.
00:32:36.554 [2024-11-20 06:43:08.143293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.554 [2024-11-20 06:43:08.143326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.143510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.143542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.143808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.143839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.143964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.143995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.144240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.144273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.144389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.144419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.144710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.144742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.144930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.144961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.145089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.145120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.145247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.145280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.145540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.145577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.145818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.145848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.146044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.146075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.146176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.146217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.146389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.146420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.146532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.146563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.146747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.146779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.146988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.147019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.147213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.147246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.147493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.147525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.147648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.147680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.147862] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:32:36.555 [2024-11-20 06:43:08.147871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.147904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 [2024-11-20 06:43:08.147911] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.148016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.148047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.148241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.148273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.148401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.148433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.148636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.148668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.148915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.148946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.149131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.149163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.149352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.149385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.149579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.149611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.149879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.149912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.150029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.150062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.150255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.150289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.150424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.150457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.150632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.150665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.150790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.150822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.151097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.151136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.555 qpair failed and we were unable to recover it.
00:32:36.555 [2024-11-20 06:43:08.151330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.555 [2024-11-20 06:43:08.151363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.151535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.151567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.151752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.151785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.152010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.152043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.152234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.152268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.152396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.152428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.152603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.152634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.152769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.152800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.152904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.152935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.153108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.153139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.153244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.153276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.153471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.153502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.153608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.153639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.153906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.153938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.154123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.154155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.154302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.154335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.154856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.154893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.155080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.155113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.155236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.155269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.155512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.155545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.155757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.155788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.156068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.156100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.156372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.156407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.156529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.156561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.156679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.156711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.156902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.156934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.157103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.157136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.157284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.157318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.157494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.157527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.157655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.157687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.157821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.157853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.158057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.158089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.158335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.158369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.158543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.158575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.158695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.158726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.158911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.158943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.159128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.159160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.159409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.159442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.556 [2024-11-20 06:43:08.159620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.556 [2024-11-20 06:43:08.159653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.556 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.159834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.159867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.159987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.160024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.160142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.160175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.160307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.160350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.160589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.160620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.160750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.160783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.160886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.160918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.161158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.161189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.161372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.161405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.161595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.161627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.161805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.161837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.162026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.162058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.162166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.162198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.162351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.162383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.162621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.162653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.162768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.162798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.162914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.162946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.163064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.163096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.163289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.163322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.163441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.163475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.163669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.163701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.163881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.163913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.164042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.164074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.164259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.164293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.164432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.164463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.164652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.164684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.164877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.164909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.165092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.165124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.557 qpair failed and we were unable to recover it.
00:32:36.557 [2024-11-20 06:43:08.165318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.557 [2024-11-20 06:43:08.165351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.165538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.165580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.165854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.165888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.166013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.166045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.166169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.166212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.166412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.166444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.166637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.166668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.166726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9deaf0 (9): Bad file descriptor
00:32:36.558 [2024-11-20 06:43:08.167075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.167147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.167494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.167533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.167669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.167702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.167988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.168021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.168262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.168298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.168441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.168474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.168662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.168694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.168825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.168859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.169000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.169031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.169307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.169342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.169484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.169517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.169758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.169791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.169976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.170009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.170122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.170155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.170377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.170410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.170585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.170618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.170755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.170789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.171057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.171090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.171288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.171321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.558 qpair failed and we were unable to recover it.
00:32:36.558 [2024-11-20 06:43:08.171451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.558 [2024-11-20 06:43:08.171484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.171682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.171715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.171900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.171932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.172063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.172096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.172303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.172338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.172535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.172568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.172703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.172736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.172859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.172892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.173018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.173050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.173168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.173200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.173394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.173427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.173630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.173662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.173852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.173895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.174084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.174116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.174402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.174444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.174641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.174675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.174867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.174899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.175076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.559 [2024-11-20 06:43:08.175109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.559 qpair failed and we were unable to recover it.
00:32:36.559 [2024-11-20 06:43:08.175284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.175319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.559 [2024-11-20 06:43:08.175534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.175566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.559 [2024-11-20 06:43:08.175784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.175817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.559 [2024-11-20 06:43:08.175931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.175976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.559 [2024-11-20 06:43:08.176090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.176123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.559 [2024-11-20 06:43:08.176273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.176308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.559 [2024-11-20 06:43:08.176412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.176444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.559 [2024-11-20 06:43:08.176634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.176667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.559 [2024-11-20 06:43:08.176885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.559 [2024-11-20 06:43:08.176918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.559 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.177039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.177071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 
00:32:36.560 [2024-11-20 06:43:08.177265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.177301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.177474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.177508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.177692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.177725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.177849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.177880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.178064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.178097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.178341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.178374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.178620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.178653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.178825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.178857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.179050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.179082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.179300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.179334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 
00:32:36.560 [2024-11-20 06:43:08.179555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.179588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.179857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.179889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.180073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.180105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.180369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.180402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.180593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.180624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.180810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.180844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.181056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.181089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.181216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.181249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.181358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.181391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.560 qpair failed and we were unable to recover it. 00:32:36.560 [2024-11-20 06:43:08.181578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.560 [2024-11-20 06:43:08.181612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 
00:32:36.561 [2024-11-20 06:43:08.181742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.181776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.181987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.182019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.182145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.182177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.182301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.182332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.182498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.182530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.182653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.182685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.182808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.182845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.183020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.183053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.183316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.183348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.183589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.183621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 
00:32:36.561 [2024-11-20 06:43:08.183859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.183890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.184137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.184170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.184425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.184458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.184732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.184764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.185023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.185053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.185295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.185329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.185467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.185498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.185615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.185647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.185783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.185815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.186085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.186117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 
00:32:36.561 [2024-11-20 06:43:08.186376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.186409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.186586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.186618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.186909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.186941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.187056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.187087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.187218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.187251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.561 [2024-11-20 06:43:08.187390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.561 [2024-11-20 06:43:08.187422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.561 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.187686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.187717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.187839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.187870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.187998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.188030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.188214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.188246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 
00:32:36.562 [2024-11-20 06:43:08.188365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.188397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.188519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.188550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.188817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.188849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.188973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.189010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.189129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.189161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.189420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.189459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.189740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.189773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.189947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.189980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.190226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.190260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.190454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.190487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 
00:32:36.562 [2024-11-20 06:43:08.190605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.190636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.190822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.190854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.191037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.191069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.191171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.191213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.191340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.191371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.191543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.191575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.191679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.191717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.191890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.191921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.192169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.192229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.192413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.192446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 
00:32:36.562 [2024-11-20 06:43:08.192689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.192720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.562 [2024-11-20 06:43:08.192912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.562 [2024-11-20 06:43:08.192943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.562 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.193113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.193144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.193354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.193387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.193670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.193702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.193819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.193850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.193970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.194002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.194121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.194152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.194340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.194373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.194565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.194597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 
00:32:36.563 [2024-11-20 06:43:08.194736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.194768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.194951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.194982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.195107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.195138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.195324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.195356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.195524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.195557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.195666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.195698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.195896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.195927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.196112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.196144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.196421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.196454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.196584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.196616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 
00:32:36.563 [2024-11-20 06:43:08.196858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.196890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.197098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.197129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.197308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.197341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.197532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.197568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.197760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.197792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.198031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.198065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.198254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.198289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.563 [2024-11-20 06:43:08.198414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.563 [2024-11-20 06:43:08.198446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.563 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.198566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.198598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.198786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.198819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 
00:32:36.564 [2024-11-20 06:43:08.199082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.199112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.199366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.199400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.199531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.199562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.199696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.199728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.199856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.199888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.199988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.200020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.200258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.200302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.200413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.200444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.200546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.200578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.200763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.200795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 
00:32:36.564 [2024-11-20 06:43:08.200991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.201021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.201196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.201241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.201501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.201533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.201709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.201740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.201910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.201941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.202198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.202239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.202484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.202516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.202805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.202835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.203007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.203038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.203231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.203265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 
00:32:36.564 [2024-11-20 06:43:08.203375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.203407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.203596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.203628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.203867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.203899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.564 [2024-11-20 06:43:08.204022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.564 [2024-11-20 06:43:08.204054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.564 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.204254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.204290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.204424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.204456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.204628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.204659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.204952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.204984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.205110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.205142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.205307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.205340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 
00:32:36.565 [2024-11-20 06:43:08.205442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.205473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.205660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.205692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.205959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.205991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.206262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.206295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.206399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.206430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.206549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.206581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.206700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.206731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.206985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.207017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.207218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.207253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.207446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.207478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 
00:32:36.565 [2024-11-20 06:43:08.207678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.207709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.207828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.207860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.207996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.208027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.208235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.208270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.208386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.208418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.208588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.208621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.208826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.208864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.209116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.209150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.209394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.209428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 00:32:36.565 [2024-11-20 06:43:08.209542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.565 [2024-11-20 06:43:08.209573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.565 qpair failed and we were unable to recover it. 
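Every posix_sock_create failure above reports errno = 111, which on Linux is ECONNREFUSED: the TCP connection attempt to 10.0.0.2:4420 (the default NVMe/TCP service port) is being actively rejected, which typically means the target host is reachable but nothing is listening on that port. A minimal standalone sketch (hypothetical repro, not SPDK code) that produces the same errno against a reachable host with no listener on the port:

/* Hypothetical repro of the errno = 111 seen above: connect() to a
 * reachable host on a TCP port with no listener returns ECONNREFUSED.
 * Not SPDK code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* default NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target addr from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host up but no listener bound to the port, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Had the host itself been down or unroutable, connect() would instead fail with ETIMEDOUT or EHOSTUNREACH, so a steady errno = 111 points at a missing (or not-yet-started) NVMe-oF listener rather than a network outage.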
00:32:36.566 [2024-11-20 06:43:08.209839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.209871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.210111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.210142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.210392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.210424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.210618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.210650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.210863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.210895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.211136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.211169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.211353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.211426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.211609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.211644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.211891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.211924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 00:32:36.566 [2024-11-20 06:43:08.212059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.566 [2024-11-20 06:43:08.212091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.566 qpair failed and we were unable to recover it. 
00:32:36.566 [2024-11-20 06:43:08.212228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.212263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.212510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.212542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.212726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.212757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.212950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.212982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.213168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.213199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.213350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.213382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.213515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.213547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.213750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.213782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.213967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.214000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.214241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.214275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.214465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.214496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.214748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.214779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.566 [2024-11-20 06:43:08.214989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.566 [2024-11-20 06:43:08.215022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.566 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.215220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.215253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.215396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.215428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.215614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.215646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.215761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.215792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.216002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.216034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.216218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.216252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.216452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.216484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.216602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.216634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.216871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.216903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.217081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.217114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.217297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.217331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.217502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.217534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.217707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.217739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.217920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.217957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.218130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.218161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.218286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.218318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.218417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.218448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.218687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.218718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.218916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.218947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.219187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.219227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.219415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.219446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.219588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.219619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.219731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.219762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.567 [2024-11-20 06:43:08.219869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.567 [2024-11-20 06:43:08.219900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.567 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.220035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.220068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.220302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.220335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.220509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.220541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.220666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.220698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.220827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.220858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.221045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.221076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.221253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.221286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.221483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.221517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.221691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.221722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.221903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.221935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.222128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.222160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.222356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.222388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.222576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.222608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.222803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.222835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.223031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.223063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.223248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.223281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.223488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.223520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.223691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.223723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.223896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.223928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.224065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.224098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.224337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.224370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.224619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.224651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.224843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.224878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.225072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.225103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.225374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.225408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.568 qpair failed and we were unable to recover it.
00:32:36.568 [2024-11-20 06:43:08.225583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.568 [2024-11-20 06:43:08.225616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.225796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.225828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.225997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.226029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.226287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.226320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.226516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.226563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.226762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.226794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.226984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.227015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.227189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.227246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.227453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.227484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.227608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.227640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.227905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.227937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.228078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.228110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.228284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.228318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.228456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.228488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.228694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.228726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.228829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.228861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.229070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.229101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.229275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.229309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.229433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.229465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.229668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.229700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.229914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.229945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.230066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.230098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.230283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.230316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.230583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.230615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 [2024-11-20 06:43:08.230623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.230737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.230769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.230940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.569 [2024-11-20 06:43:08.230972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.569 qpair failed and we were unable to recover it.
00:32:36.569 [2024-11-20 06:43:08.231147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.231178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.231375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.231407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.231589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.231621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.231803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.231835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.232025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.232056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.232168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.232199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.232402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.232434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.232743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.232774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.233016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.233047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.233241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.233274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.233474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.233506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.233696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.233728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.233870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.233901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.234018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.234049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.234148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.234179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.234459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.234491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.234714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.234746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.234867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.234898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.235186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.235225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.235350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.235382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.235498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.235529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.235718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.235750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.235992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.236023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.236242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.236278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.236384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.236418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.236606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.570 [2024-11-20 06:43:08.236648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.570 qpair failed and we were unable to recover it.
00:32:36.570 [2024-11-20 06:43:08.236763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.236796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.236987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.237019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.237285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.237318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.237440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.237473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.237593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.237624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.237868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.237907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.238083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.238115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.238233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.238267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.238450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.238483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.238675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.238707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.238955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.238988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.239123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.239154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.239352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.239386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.239672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.239706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.239822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.239855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.239990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.240022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.240280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.240315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.240454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.240486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.240675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.240709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.240982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.241016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.241222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.241256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.241431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.241464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.241725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.241759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.241900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.241932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.242144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.242178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.242433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.571 [2024-11-20 06:43:08.242466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.571 qpair failed and we were unable to recover it.
00:32:36.571 [2024-11-20 06:43:08.242584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.242616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.242794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.242826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.243095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.243127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.243246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.243279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.243479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.243510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.243633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.243664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.243877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.243923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.244140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.244172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.244359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.244393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.244628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.244659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.244855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.244887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.245158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.245190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.245395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.245428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.245546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.245578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.245694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.245727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.245995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.246028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.246197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.246241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.246481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.246514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.246761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.246793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.247069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.247115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.247222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.247257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.247458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.247490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.247751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.247784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.247971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.248005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.248246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.248281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.248486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.572 [2024-11-20 06:43:08.248520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.572 qpair failed and we were unable to recover it.
00:32:36.572 [2024-11-20 06:43:08.248643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.573 [2024-11-20 06:43:08.248675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.573 qpair failed and we were unable to recover it.
00:32:36.573 [2024-11-20 06:43:08.248809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.573 [2024-11-20 06:43:08.248842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.573 qpair failed and we were unable to recover it.
00:32:36.573 [2024-11-20 06:43:08.249035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.573 [2024-11-20 06:43:08.249068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.573 qpair failed and we were unable to recover it.
00:32:36.573 [2024-11-20 06:43:08.249337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.249371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.249569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.249602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.249774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.249807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.249988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.250021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.250220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.250254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.250492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.250525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.250801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.250833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.250962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.250995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.251178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.251229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.251431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.251465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 
00:32:36.573 [2024-11-20 06:43:08.251653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.251685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.251864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.251898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.252171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.252217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.252411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.252443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.252647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.252680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.252864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.252897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.253134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.253167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.253409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.253482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.573 [2024-11-20 06:43:08.253650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.573 [2024-11-20 06:43:08.253687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.573 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.253810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.253844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 
00:32:36.574 [2024-11-20 06:43:08.254031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.254063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.254262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.254297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.254543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.254575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.254752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.254786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.254986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.255018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.255195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.255239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.255491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.255522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.255648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.255680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.255818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.255849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.255993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.256025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 
00:32:36.574 [2024-11-20 06:43:08.256264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.256298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.256501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.256534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.256649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.256681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.256944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.256977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.257237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.257271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.257526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.257558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.257800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.257832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.258017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.258049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.258313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.258346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.258457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.258490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 
00:32:36.574 [2024-11-20 06:43:08.258665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.258697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.258804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.258838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.259037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.259070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.259325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.259359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.574 [2024-11-20 06:43:08.259473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.574 [2024-11-20 06:43:08.259512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.574 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.259648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.259680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.259798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.259830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.260097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.260130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.260322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.260357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.260619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.260651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 
00:32:36.575 [2024-11-20 06:43:08.260776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.260809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.260917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.260949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.261125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.261157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.261283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.261316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.261543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.261576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.261861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.261893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.262076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.262108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.262283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.262318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.262524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.262558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.262754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.262786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 
00:32:36.575 [2024-11-20 06:43:08.262922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.262956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.263149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.263182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.263387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.263420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.263586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.263618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.263799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.263833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.264010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.264041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.264150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.264183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.264417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.264451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.264667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.264699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.264894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.264925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 
00:32:36.575 [2024-11-20 06:43:08.265170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.575 [2024-11-20 06:43:08.265213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.575 qpair failed and we were unable to recover it. 00:32:36.575 [2024-11-20 06:43:08.265416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.265447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.265656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.265689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.265814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.265846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.266088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.266121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.266261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.266294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.266564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.266597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.266783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.266815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.267084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.267117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.267380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.267413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 
00:32:36.576 [2024-11-20 06:43:08.267631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.267671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.267779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.267811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.268075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.268107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.268289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.268323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.268500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.268533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.268720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.268775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.268894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.268927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.269113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.269145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.269265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.269299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 00:32:36.576 [2024-11-20 06:43:08.269520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.576 [2024-11-20 06:43:08.269553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.576 qpair failed and we were unable to recover it. 
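[Editor's note] errno = 111 on Linux is ECONNREFUSED: the TCP SYN to 10.0.0.2 port 4420 (the well-known NVMe/TCP port) is being answered with a reset, i.e. the host is reachable but nothing is accepting on that port — typically the NVMe-oF target's listener is not up yet, so the initiator's posix_sock_create() keeps failing. The following is a minimal, self-contained sketch that reproduces the same errno; the address and port are placeholders (any host:port with no listener behaves the same), and it is not part of the test suite.

    /* econnrefused_demo.c — reproduce errno 111 (ECONNREFUSED) the way the log
     * shows it: a TCP connect() to a port where nothing is listening.
     * Build: cc -o econnrefused_demo econnrefused_demo.c */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                      /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* assumes no local listener */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* On Linux this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }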
00:32:36.576 [... the connect()/qpair-failure pattern continues for tqpair=0x9d0ba0 from 06:43:08.269751 through 06:43:08.271228, interleaved with the trace set-up notices below ...]
00:32:36.577 [2024-11-20 06:43:08.271170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:36.577 [2024-11-20 06:43:08.271200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:36.577 [2024-11-20 06:43:08.271216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:36.577 [2024-11-20 06:43:08.271222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:36.577 [2024-11-20 06:43:08.271231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:36.577 [2024-11-20 06:43:08.272841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:36.577 [2024-11-20 06:43:08.272947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:36.577 [2024-11-20 06:43:08.273031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:36.577 [2024-11-20 06:43:08.273032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:32:36.577 [... the connect()/qpair-failure pattern continues for tqpair=0x9d0ba0 from 06:43:08.271405 through 06:43:08.273353, interleaved with the reactor start-up notices above ...]
00:32:36.577 [... the connect()/qpair-failure pattern continues: for tqpair=0x9d0ba0 through 06:43:08.273856, for tqpair=0x7fd164000b90 from 06:43:08.274094 through 06:43:08.283000, then for tqpair=0x7fd15c000b90 again from 06:43:08.283272 through 06:43:08.291513 ...]
00:32:36.580 [2024-11-20 06:43:08.291620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.580 [2024-11-20 06:43:08.291652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.580 qpair failed and we were unable to recover it. 00:32:36.580 [2024-11-20 06:43:08.291894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.580 [2024-11-20 06:43:08.291926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.580 qpair failed and we were unable to recover it. 00:32:36.580 [2024-11-20 06:43:08.292178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.580 [2024-11-20 06:43:08.292228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.580 qpair failed and we were unable to recover it. 00:32:36.580 [2024-11-20 06:43:08.292416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.580 [2024-11-20 06:43:08.292449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.580 qpair failed and we were unable to recover it. 00:32:36.580 [2024-11-20 06:43:08.292621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.580 [2024-11-20 06:43:08.292653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.580 qpair failed and we were unable to recover it. 00:32:36.580 [2024-11-20 06:43:08.292781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.580 [2024-11-20 06:43:08.292814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.580 qpair failed and we were unable to recover it. 00:32:36.580 [2024-11-20 06:43:08.293066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.580 [2024-11-20 06:43:08.293100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.580 qpair failed and we were unable to recover it. 00:32:36.580 [2024-11-20 06:43:08.293276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.293312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.293486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.293520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.293644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.293677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 
00:32:36.581 [2024-11-20 06:43:08.293875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.293906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.294199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.294243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.294432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.294465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.294645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.294678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.294908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.294942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.295142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.295175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.295444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.295479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.295717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.295750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.296017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.296052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.296293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.296329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 
00:32:36.581 [2024-11-20 06:43:08.296458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.296490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.296684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.296718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.296921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.296953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.297082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.297115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.297244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.297277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.297466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.297500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.297629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.297661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.297784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.297818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.298089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.298123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.298393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.298429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 
00:32:36.581 [2024-11-20 06:43:08.298621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.298654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.298841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.298874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.299026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.581 [2024-11-20 06:43:08.299059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.581 qpair failed and we were unable to recover it. 00:32:36.581 [2024-11-20 06:43:08.299335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.299370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.299520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.299552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.299671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.299703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.299969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.300004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.300135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.300165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.300351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.300386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.300569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.300603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 
00:32:36.582 [2024-11-20 06:43:08.300725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.300757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.300893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.300925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.301167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.301221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.301408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.301439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.301627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.301661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.301777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.301809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.301986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.302019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.302217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.302252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.302386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.302422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.302610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.302643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 
00:32:36.582 [2024-11-20 06:43:08.302770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.302803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.302994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.303028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.303260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.303296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.303578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.303612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.303794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.303827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.304007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.304041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.304173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.304215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.304412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.304446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.304626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.304659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.304833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.304865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 
00:32:36.582 [2024-11-20 06:43:08.305053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.305087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.305225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.305260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.305434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.305467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.305642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.305674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.305811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.305844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.306085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.306119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.306300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.306334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.306500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.306533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.582 [2024-11-20 06:43:08.306731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.582 [2024-11-20 06:43:08.306764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.582 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.306915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.306948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 
00:32:36.583 [2024-11-20 06:43:08.307241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.307275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.307456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.307490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.307620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.307653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.307899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.307932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.308054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.308086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.308219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.308253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.308440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.308474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.308675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.308707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.308810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.308843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.308970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.309001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 
00:32:36.583 [2024-11-20 06:43:08.309127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.309160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.309292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.309324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.309433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.309472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.309591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.309623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.309862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.309894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.310128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.310159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.310363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.310396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.310594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.310626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.310873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.310906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.311141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.311174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 
00:32:36.583 [2024-11-20 06:43:08.311320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.311353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.311527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.311561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.311742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.311774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.312022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.312054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.312310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.312347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.312531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.312566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.312701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.312733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.312873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.312906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.313019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.313050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.313261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.313295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 
00:32:36.583 [2024-11-20 06:43:08.313421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.313453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.313638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.313672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.313799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.313830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.313950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.313983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.314264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.314298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.314562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.314596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.314772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.314804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.314929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.314961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.315151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.583 [2024-11-20 06:43:08.315182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.583 qpair failed and we were unable to recover it. 00:32:36.583 [2024-11-20 06:43:08.315507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.315543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 
00:32:36.584 [2024-11-20 06:43:08.315665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.315697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.315884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.315917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.316053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.316086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.316232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.316266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.316473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.316507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.316679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.316713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.316956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.316990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.317108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.317142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.317293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.317326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.317432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.317465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 
00:32:36.584 [2024-11-20 06:43:08.317657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.317690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.317813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.317845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.318021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.318061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.318344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.318378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.318563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.318597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.318883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.318916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.319068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.319101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.319229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.319263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.319496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.319533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 00:32:36.584 [2024-11-20 06:43:08.319778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.584 [2024-11-20 06:43:08.319810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.584 qpair failed and we were unable to recover it. 
00:32:36.584 [2024-11-20 06:43:08.319951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.584 [2024-11-20 06:43:08.319984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.584 qpair failed and we were unable to recover it.
[... 89 further identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets for tqpair=0x7fd15c000b90, addr=10.0.0.2, port=4420 (06:43:08.320162 through 06:43:08.338139) elided ...]
00:32:36.586 [2024-11-20 06:43:08.338332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.586 [2024-11-20 06:43:08.338391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.586 qpair failed and we were unable to recover it.
[... 23 further identical triplets for tqpair=0x7fd164000b90 (06:43:08.338637 through 06:43:08.343138) elided ...]
[... 15 identical triplets for tqpair=0x7fd15c000b90 (06:43:08.343422 through 06:43:08.346566) elided ...]
[... 60 identical triplets for tqpair=0x7fd164000b90 (06:43:08.346823 through 06:43:08.359139) elided ...]
[... 20 identical triplets for tqpair=0x7fd15c000b90 (06:43:08.359473 through 06:43:08.363737) elided ...]
00:32:36.858 [2024-11-20 06:43:08.363942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.858 [2024-11-20 06:43:08.363974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.858 qpair failed and we were unable to recover it.
00:32:36.858 [2024-11-20 06:43:08.364184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.364232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.364498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.364531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.364651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.364682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.364817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.364849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.365053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.365085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.365274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.365307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.365483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.365515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.365628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.365659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.365844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.365876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.366085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.366119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 
00:32:36.858 [2024-11-20 06:43:08.366243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.858 [2024-11-20 06:43:08.366277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.858 qpair failed and we were unable to recover it. 00:32:36.858 [2024-11-20 06:43:08.366489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.366520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.366698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.366730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.366930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.366962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.367220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.367253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.367369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.367401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.367581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.367612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.367734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.367770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.368023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.368054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.368322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.368356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 
00:32:36.859 [2024-11-20 06:43:08.368608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.368639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.368763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.368795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.369074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.369112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.369298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.369329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.369458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.369490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.369597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.369628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.369887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.369917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.370095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.370126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.370338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.370371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.370637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.370667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 
00:32:36.859 [2024-11-20 06:43:08.370835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.370867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.371099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.371131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.371325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.371357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.371584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.371615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.371798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.371830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.372004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.372035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.372236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.372268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.372473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.372505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.372684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.372715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.372983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.373015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 
00:32:36.859 [2024-11-20 06:43:08.373153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.373185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.373380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.373412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.373599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.373630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.373765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.373798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:36.859 [2024-11-20 06:43:08.374043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.374076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:32:36.859 [2024-11-20 06:43:08.374261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.374295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 [2024-11-20 06:43:08.374536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.374569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.859 qpair failed and we were unable to recover it. 00:32:36.859 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:36.859 [2024-11-20 06:43:08.374697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.859 [2024-11-20 06:43:08.374728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it.
00:32:36.860 [2024-11-20 06:43:08.374853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.374885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:36.860 [2024-11-20 06:43:08.374988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.375019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.375140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.375174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:36.860 [2024-11-20 06:43:08.375376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.375408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.375583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.375614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.375733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.375764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.375937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.375968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.376098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.376129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.376327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.376361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it.
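The interleaved autotest_common.sh xtrace fragments above ((( i == 0 )), return 0, timing_exit start_nvmf_tgt) are the test harness confirming the NVMe-oF target came back up, while the host side keeps retrying the qpair connect and hitting errno 111 on each attempt. A minimal, hypothetical C sketch of such a retry-on-ECONNREFUSED loop follows; this is not SPDK's actual logic, and the address, port, retry budget, and back-off are illustrative only:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Attempt one TCP connect; return the fd on success, -1 on failure
     * with errno preserved from connect(). */
    static int try_connect(const char *ip, uint16_t port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            return fd;                      /* connected */
        int err = errno;
        close(fd);
        errno = err;
        return -1;
    }

    int main(void) {
        /* Illustrative target address and retry budget (assumptions). */
        for (int attempt = 0; attempt < 100; attempt++) {
            int fd = try_connect("10.0.0.2", 4420);
            if (fd >= 0) {
                printf("connected after %d failed attempts\n", attempt);
                close(fd);
                return 0;
            }
            if (errno != ECONNREFUSED)
                break;                      /* only "connection refused" is retryable here */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
            usleep(100 * 1000);             /* back off 100 ms between attempts */
        }
        return 1;
    }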
00:32:36.860 [2024-11-20 06:43:08.376494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.376527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.376648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.376679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.376854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.376885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.377129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.377167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.377418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.377450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.377576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.377608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.377798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.377829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.378036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.378068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.378249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.378281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.378467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.378499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 
00:32:36.860 [2024-11-20 06:43:08.378615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.378647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.378867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.378899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.379022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.379054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.379246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.379281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.379385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.379418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.379591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.379622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.379820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.379852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.380054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.380086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.380224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.380257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.380433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.380464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 
00:32:36.860 [2024-11-20 06:43:08.380648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.380681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.380866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.380898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.381070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.381101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.381225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.381258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.381443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.381474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.381678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.381710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.381844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.860 [2024-11-20 06:43:08.381876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.860 qpair failed and we were unable to recover it. 00:32:36.860 [2024-11-20 06:43:08.382063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.382095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.382237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.382270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.382471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.382504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 
00:32:36.861 [2024-11-20 06:43:08.382717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.382754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.382932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.382965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.383091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.383123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.383254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.383287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.383497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.383530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.383704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.383736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.383907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.383939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.384073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.384106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.384248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.384282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.384533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.384566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 
00:32:36.861 [2024-11-20 06:43:08.384745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.384779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.384927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.384959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.385082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.385114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.385304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.385345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.385470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.385506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.385804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.385837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.386013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.386045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.386164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.386198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.386382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.386415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.386565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.386600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 
00:32:36.861 [2024-11-20 06:43:08.386727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.386759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.386878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.386910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.387028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.387060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.387176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.387220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.387413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.387445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.387643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.387675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.387892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.387925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.388058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.388091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.861 [2024-11-20 06:43:08.388228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.861 [2024-11-20 06:43:08.388264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.861 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.388452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.388485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 
00:32:36.862 [2024-11-20 06:43:08.388600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.388632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.388731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.388763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.389008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.389041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.389222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.389256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.389385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.389417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.389591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.389623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.389813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.389846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.389969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.390002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.390124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.390156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 00:32:36.862 [2024-11-20 06:43:08.390279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.390312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it. 
00:32:36.862 [2024-11-20 06:43:08.390439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.862 [2024-11-20 06:43:08.390472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.862 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x7fd15c000b90 from 06:43:08.390579 through 06:43:08.398065 ...]
00:32:36.863 [2024-11-20 06:43:08.398303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.863 [2024-11-20 06:43:08.398358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.863 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x9d0ba0 from 06:43:08.398510 through 06:43:08.406137 ...]
[... the same triple repeats again for tqpair=0x7fd15c000b90 from 06:43:08.406299 through 06:43:08.408806 ...]
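[Note, not part of the captured log: errno = 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 when posix_sock_create() called connect(). In a target-disconnect test that is the intended condition, and the host side keeps retrying each qpair until the test reconnects the target or gives up, which is why the same triple repeats. A minimal shell sketch that observes the same errno, assuming a bash built with /dev/tcp support; the host and port come from the log, the rest is illustrative:

    # Probe the NVMe/TCP listener the same way the failing connect() does.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 refused (cf. errno = 111 above)"
    fi
]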
00:32:36.864 [2024-11-20 06:43:08.408957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.864 [2024-11-20 06:43:08.408990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.864 qpair failed and we were unable to recover it.
00:32:36.865 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... the triple repeats for tqpair=0x7fd15c000b90 at 06:43:08.409171 and 06:43:08.409318 ...]
00:32:36.865 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... this xtrace line was interleaved inside the 06:43:08.409461 triple; that triple and one more at 06:43:08.409626 follow for tqpair=0x7fd15c000b90 ...]
00:32:36.865 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
[... one more triple for tqpair=0x7fd15c000b90 at 06:43:08.409838, then the failures move to tqpair=0x9d0ba0 at 06:43:08.409989 ...]
00:32:36.865 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the triple repeats for tqpair=0x9d0ba0 at 06:43:08.410163 ...]
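[Note, not part of the captured log: the "-- # ..." lines above are bash xtrace output from the test scripts, interleaved with the connect-retry noise. The trap registers cleanup (dump the app's shared memory, then nvmftestfini) so teardown runs even if the test is interrupted; rpc_cmd then asks the running SPDK application to create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0, for use as the test namespace. Roughly the same two steps outside the harness, using SPDK's stock rpc.py client and a simplified trap body (tgt_pid is illustrative):

    # Tear the target app down on exit, interrupt, or termination.
    trap 'kill "$tgt_pid" 2>/dev/null || :' SIGINT SIGTERM EXIT
    # 64 MB total, 512-byte blocks, bdev name Malloc0.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
]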
[... the triple repeats for tqpair=0x9d0ba0 from 06:43:08.410408 through 06:43:08.419760 ...]
00:32:36.866 [2024-11-20 06:43:08.419993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.866 [2024-11-20 06:43:08.420030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.866 qpair failed and we were unable to recover it.
[... the triple repeats for tqpair=0x7fd15c000b90 from 06:43:08.420238 through 06:43:08.427776 ...]
00:32:36.867 [2024-11-20 06:43:08.427930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.867 [2024-11-20 06:43:08.427973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.867 qpair failed and we were unable to recover it.
[... the triple repeats for tqpair=0x7fd164000b90 from 06:43:08.428194 through 06:43:08.429339 ...]
00:32:36.868 [2024-11-20 06:43:08.429490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.429522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.429661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.429692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.429869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.429901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.430098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.430129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.430314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.430347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.430471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.430502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.430690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.430720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.430893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.430924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.431127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.431158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.431288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.431394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 
00:32:36.868 [2024-11-20 06:43:08.431616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.431648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.431883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.431917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.432155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.432187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.432375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.432407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.432592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.432623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.432742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.432774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.432963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.432996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.433173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.433212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.433386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.433417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.433551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.433582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 
00:32:36.868 [2024-11-20 06:43:08.433799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.433832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.434026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.434057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.434168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.434199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.434390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.434422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.434624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.434655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.434837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.434868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.435058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.435090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.435268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.435301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.435445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.435478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.435665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.435698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 
00:32:36.868 [2024-11-20 06:43:08.435948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.435979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.436110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.436142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.436276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.436314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.436490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.436522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.436765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.436797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.868 [2024-11-20 06:43:08.437029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.868 [2024-11-20 06:43:08.437061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.868 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.437171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.437214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.437336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.437368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.437552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.437583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.437703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.437733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 
00:32:36.869 [2024-11-20 06:43:08.437909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.437941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.438068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.438099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.438231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.438265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.438469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.438501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.438770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.438801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.438924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.438956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.439140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.439173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.439435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.439510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.439745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.439790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.439984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.440017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 
00:32:36.869 [2024-11-20 06:43:08.440190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.440235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.440418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.440452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.440668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.440700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.440897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.440930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.441220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.441255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.441533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.441564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.441692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.441724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.441840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.441872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.442060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.442092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d0ba0 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.442273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.442308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 
00:32:36.869 [2024-11-20 06:43:08.442487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.442519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.442706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.442738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.442936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.442967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.443220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.443252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.443452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.443484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.443674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.443706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.443887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.443918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.444035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.444066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.444175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.444217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.444426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.444457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 
00:32:36.869 [2024-11-20 06:43:08.444717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.444750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.444866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.444899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.869 [2024-11-20 06:43:08.445036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.869 [2024-11-20 06:43:08.445073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.869 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.445298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.445331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.445573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.445605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.445872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.445904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.446034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.446065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.446209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.446242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.446421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.446458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.446641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.446673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 
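errno = 111 is ECONNREFUSED on Linux: the initiator above appears to be polling connect() against 10.0.0.2:4420 while nothing on the target side is accepting yet, so the target's kernel resets each attempt and every qpair setup fails immediately. A minimal way to observe the same failure outside SPDK, assuming a reachable peer with no listener bound to the port (bash's /dev/tcp pseudo-device issues a real connect()):

# Illustration only, not part of the test run. With the peer up but
# port 4420 closed, connect() fails with errno 111 (ECONNREFUSED).
(exec 3<>/dev/tcp/10.0.0.2/4420) || echo "connect refused, exit=$?"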
00:32:36.870 Malloc0
00:32:36.870 [2024-11-20 06:43:08.446877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.870 [2024-11-20 06:43:08.446910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.870 qpair failed and we were unable to recover it.
00:32:36.870 [2024-11-20 06:43:08.447049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.870 [2024-11-20 06:43:08.447082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.870 qpair failed and we were unable to recover it.
00:32:36.870 [2024-11-20 06:43:08.447267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.870 [2024-11-20 06:43:08.447301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.870 qpair failed and we were unable to recover it.
00:32:36.870 [2024-11-20 06:43:08.447511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.870 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.870 [2024-11-20 06:43:08.447544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.870 qpair failed and we were unable to recover it.
00:32:36.870 [2024-11-20 06:43:08.447793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.870 [2024-11-20 06:43:08.447825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.870 qpair failed and we were unable to recover it.
00:32:36.870 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:36.870 [2024-11-20 06:43:08.448109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.870 [2024-11-20 06:43:08.448142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.870 qpair failed and we were unable to recover it.
00:32:36.870 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.870 [2024-11-20 06:43:08.448435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.870 [2024-11-20 06:43:08.448468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.870 qpair failed and we were unable to recover it.
00:32:36.870 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:36.870 [2024-11-20 06:43:08.448641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.870 [2024-11-20 06:43:08.448674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.870 qpair failed and we were unable to recover it.
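Interleaved with the retries above, the bash xtrace shows the target side being brought up: a Malloc0 bdev is in place and host/target_disconnect.sh@21 issues rpc_cmd nvmf_create_transport -t tcp -o. rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, so a standalone equivalent (paths and flag meanings assumed from a stock SPDK checkout; in upstream rpc.py, -t selects the transport type and -o disables the TCP C2H-success optimization) would be roughly:

# Sketch of the traced step outside the harness (assumptions noted above):
./scripts/rpc.py nvmf_create_transport -t tcp -o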
00:32:36.870 [2024-11-20 06:43:08.448861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.448892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.449081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.449112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.449351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.449386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.449639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.449671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.449776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.449808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.450091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.450124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.450324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.450358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.450623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.450656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.450769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.450800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.450985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.451024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 
00:32:36.870 [2024-11-20 06:43:08.451293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.451326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.451465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.451496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.451701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.451734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.451845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.451877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.452157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.452190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.452445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.452478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.452591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.452622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.870 qpair failed and we were unable to recover it. 00:32:36.870 [2024-11-20 06:43:08.452750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.870 [2024-11-20 06:43:08.452782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.452998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.453029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.453212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.453245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 
00:32:36.871 [2024-11-20 06:43:08.453381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.453412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 [2024-11-20 06:43:08.453604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.453636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 [2024-11-20 06:43:08.453769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.453801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 [2024-11-20 06:43:08.453937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.453968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 [2024-11-20 06:43:08.454234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.454267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 [2024-11-20 06:43:08.454440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.454472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 [2024-11-20 06:43:08.454472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 [2024-11-20 06:43:08.454643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.454675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 [2024-11-20 06:43:08.454950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.454982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 [2024-11-20 06:43:08.455195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.455237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
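The *NOTICE* from tcp.c fused into the record above ("*** TCP Transport Init ***") is the target acknowledging that nvmf_create_transport completed; the host's connect() loop keeps failing because, at this point in the bring-up, no TCP listener has yet been added for any subsystem on 10.0.0.2:4420. If one were inspecting the target by hand here, the new transport should be visible via the stock RPC client (a sketch, assuming the default RPC socket):

# Sketch only: list transports on the running target to confirm the init.
./scripts/rpc.py nvmf_get_transports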
00:32:36.871 [2024-11-20 06:43:08.455366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.455397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.455572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.455603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.455867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.455898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.456099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.456130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.456302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.456335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.456517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.456548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.456673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.456705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.456820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.456852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.457025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.457057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.457236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.457277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 
00:32:36.871 [2024-11-20 06:43:08.457473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.457505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.457770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.457801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.457991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.458023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.458261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.458293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.458480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.458512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.458629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.458660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.458841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.458872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.459045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.459076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.459192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.459235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 00:32:36.871 [2024-11-20 06:43:08.459504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.871 [2024-11-20 06:43:08.459537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd164000b90 with addr=10.0.0.2, port=4420 00:32:36.871 qpair failed and we were unable to recover it. 
00:32:36.871 [2024-11-20 06:43:08.459745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.459787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.871 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.871 [2024-11-20 06:43:08.459976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.871 [2024-11-20 06:43:08.460010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.871 qpair failed and we were unable to recover it.
00:32:36.872 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:36.872 [2024-11-20 06:43:08.460341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.872 [2024-11-20 06:43:08.460374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.872 qpair failed and we were unable to recover it.
00:32:36.872 [2024-11-20 06:43:08.460513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.872 [2024-11-20 06:43:08.460547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.872 qpair failed and we were unable to recover it.
00:32:36.872 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.872 [2024-11-20 06:43:08.460757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.872 [2024-11-20 06:43:08.460790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.872 qpair failed and we were unable to recover it.
00:32:36.872 [2024-11-20 06:43:08.460903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.872 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:36.872 [2024-11-20 06:43:08.460936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.872 qpair failed and we were unable to recover it.
00:32:36.872 [2024-11-20 06:43:08.461122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.872 [2024-11-20 06:43:08.461159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.872 qpair failed and we were unable to recover it.
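host/target_disconnect.sh@22 then creates the subsystem nqn.2016-06.io.spdk:cnode1 with any-host access (-a) and serial SPDK00000000000001 (-s). In a typical SPDK target bring-up the next steps, which this excerpt ends before showing, attach the Malloc0 bdev as a namespace and open the TCP listener the host's connect() loop is waiting on; a hedged sketch with stock rpc.py:

# Assumed follow-on steps (standard SPDK target setup, not visible in this log):
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420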
00:32:36.872 [2024-11-20 06:43:08.461284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.461315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.461414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.461446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.461572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.461605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.461848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.461888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.462126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.462158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.462342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.462376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.462556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.462588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.462861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.462894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.463105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.463137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.463278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.463313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 
00:32:36.872 [2024-11-20 06:43:08.463500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.463532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.463745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.463778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.463968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.464001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.464109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.464141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.464332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.464365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.464631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.464663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.464916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.464949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.465143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.465175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.465362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.465395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.465525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.465558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 
00:32:36.872 [2024-11-20 06:43:08.465729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.465760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.465953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.465985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.466271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.466306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.466500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.466532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.466706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.466738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.466882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.466914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.467095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.467127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.467311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.467345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.872 [2024-11-20 06:43:08.467595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.872 [2024-11-20 06:43:08.467627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.872 qpair failed and we were unable to recover it. 00:32:36.873 [2024-11-20 06:43:08.467753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.873 [2024-11-20 06:43:08.467786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420 00:32:36.873 qpair failed and we were unable to recover it. 
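[triage note: in the burst above, errno 111 is ECONNREFUSED; posix_sock_create() is reaching 10.0.0.2:4420 while nothing is listening there yet, so every nvme_tcp_qpair_connect_sock attempt fails until the target's listener comes up. A minimal bash sketch, not part of the test suite, that probes the same condition with bash's /dev/tcp pseudo-device; address and port are taken from the records above:

    addr=10.0.0.2 port=4420            # values from the failing connects above
    # The subshell exits 0 only once connect() succeeds; the fd closes on exit.
    until (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; do
        sleep 0.1                      # still ECONNREFUSED (errno 111)
    done
    echo "listener is up on ${addr}:${port}"
]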
00:32:36.873 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.873 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:36.873 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.873 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:36.873 [2024-11-20 06:43:08.468025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.873 [2024-11-20 06:43:08.468059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.873 qpair failed and we were unable to recover it.
00:32:36.873 (the triple repeats, connect timestamps 06:43:08.468182 through 06:43:08.469568)
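[triage note: the rpc_cmd lines traced above are thin wrappers around SPDK's scripts/rpc.py. A standalone sketch of this part of the setup, assuming a running nvmf target app and the default RPC socket; the Malloc0 size and block values are assumptions, not from this log:

    RPC=./scripts/rpc.py
    $RPC bdev_malloc_create -b Malloc0 64 512        # backing bdev (sizes assumed)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
]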
00:32:36.873 [2024-11-20 06:43:08.469789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.873 [2024-11-20 06:43:08.469822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.873 qpair failed and we were unable to recover it.
00:32:36.873 (the triple repeats, connect timestamps 06:43:08.469945 through 06:43:08.475763)
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:36.874 [2024-11-20 06:43:08.475987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.874 [2024-11-20 06:43:08.476020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd15c000b90 with addr=10.0.0.2, port=4420
00:32:36.874 qpair failed and we were unable to recover it.
00:32:36.874 (the triple repeats, connect timestamps 06:43:08.476199 through 06:43:08.479224)
00:32:36.874 [2024-11-20 06:43:08.479446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:36.874 [2024-11-20 06:43:08.485138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:36.874 [2024-11-20 06:43:08.485256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:36.874 [2024-11-20 06:43:08.485304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:36.874 [2024-11-20 06:43:08.485328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:36.874 [2024-11-20 06:43:08.485351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:36.874 [2024-11-20 06:43:08.485404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:36.874 qpair failed and we were unable to recover it.
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.874 06:43:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 710240
00:32:36.874 [2024-11-20 06:43:08.495073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:36.874 [2024-11-20 06:43:08.495187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:36.874 [2024-11-20 06:43:08.495229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:36.874 [2024-11-20 06:43:08.495247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:36.874 [2024-11-20 06:43:08.495262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:36.874 [2024-11-20 06:43:08.495298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:36.874 qpair failed and we were unable to recover it.
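[triage note: the nvmf_tcp_listen NOTICE above is the first target-side success in this excerpt; the subsystem listener and the discovery listener are now both on 10.0.0.2:4420. As plain rpc.py calls, the listener half of the setup looks roughly like the sketch below; nvmf_create_transport is included only because a TCP listener presupposes it, and it happened earlier in the test, outside this excerpt:

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp                # done earlier in the test
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
]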
00:32:36.875 (the same CONNECT-failure block (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; failed to poll/connect tqpair=0x7fd15c000b90; CQ transport error -6 on qpair id 2; qpair failed and we were unable to recover it) repeats at roughly 10 ms intervals, timestamps 06:43:08.505061 through 06:43:08.795800)
00:32:37.136 [2024-11-20 06:43:08.805815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.136 [2024-11-20 06:43:08.805873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.136 [2024-11-20 06:43:08.805886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.136 [2024-11-20 06:43:08.805896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.136 [2024-11-20 06:43:08.805902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.136 [2024-11-20 06:43:08.805916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.136 qpair failed and we were unable to recover it. 00:32:37.136 [2024-11-20 06:43:08.815907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.136 [2024-11-20 06:43:08.815964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.136 [2024-11-20 06:43:08.815978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.136 [2024-11-20 06:43:08.815985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.136 [2024-11-20 06:43:08.815990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.136 [2024-11-20 06:43:08.816004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.136 qpair failed and we were unable to recover it. 00:32:37.136 [2024-11-20 06:43:08.825874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.136 [2024-11-20 06:43:08.825926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.136 [2024-11-20 06:43:08.825939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.136 [2024-11-20 06:43:08.825945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.136 [2024-11-20 06:43:08.825951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.136 [2024-11-20 06:43:08.825965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.136 qpair failed and we were unable to recover it. 
00:32:37.136 [2024-11-20 06:43:08.835923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.136 [2024-11-20 06:43:08.835979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.136 [2024-11-20 06:43:08.835993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.136 [2024-11-20 06:43:08.835999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.136 [2024-11-20 06:43:08.836006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.136 [2024-11-20 06:43:08.836020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.136 qpair failed and we were unable to recover it. 00:32:37.136 [2024-11-20 06:43:08.845948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.136 [2024-11-20 06:43:08.846005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.136 [2024-11-20 06:43:08.846019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.136 [2024-11-20 06:43:08.846025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.136 [2024-11-20 06:43:08.846031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.136 [2024-11-20 06:43:08.846049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.136 qpair failed and we were unable to recover it. 00:32:37.136 [2024-11-20 06:43:08.855940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.136 [2024-11-20 06:43:08.855997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.136 [2024-11-20 06:43:08.856010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.136 [2024-11-20 06:43:08.856017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.136 [2024-11-20 06:43:08.856022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.136 [2024-11-20 06:43:08.856037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.136 qpair failed and we were unable to recover it. 
00:32:37.136 [2024-11-20 06:43:08.865989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.136 [2024-11-20 06:43:08.866039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.136 [2024-11-20 06:43:08.866052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.136 [2024-11-20 06:43:08.866059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.136 [2024-11-20 06:43:08.866065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.866079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 00:32:37.137 [2024-11-20 06:43:08.876042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.876102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.876115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.876122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.876127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.876142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 00:32:37.137 [2024-11-20 06:43:08.886078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.886135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.886149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.886155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.886161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.886175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 
00:32:37.137 [2024-11-20 06:43:08.896079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.896131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.896144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.896150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.896156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.896170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 00:32:37.137 [2024-11-20 06:43:08.906160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.906249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.906262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.906269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.906274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.906288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 00:32:37.137 [2024-11-20 06:43:08.916144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.916198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.916215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.916221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.916227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.916242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 
00:32:37.137 [2024-11-20 06:43:08.926196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.926268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.926285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.926292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.926298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.926314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 00:32:37.137 [2024-11-20 06:43:08.936210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.936268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.936287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.936294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.936300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.936316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 00:32:37.137 [2024-11-20 06:43:08.946227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.946277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.946291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.946298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.946304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.946318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 
00:32:37.137 [2024-11-20 06:43:08.956270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.137 [2024-11-20 06:43:08.956326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.137 [2024-11-20 06:43:08.956340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.137 [2024-11-20 06:43:08.956347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.137 [2024-11-20 06:43:08.956354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.137 [2024-11-20 06:43:08.956368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.137 qpair failed and we were unable to recover it. 00:32:37.395 [2024-11-20 06:43:08.966363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.395 [2024-11-20 06:43:08.966466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.395 [2024-11-20 06:43:08.966484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:08.966491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:08.966498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:08.966515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:08.976341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:08.976404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:08.976421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:08.976428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:08.976438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:08.976454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 
00:32:37.396 [2024-11-20 06:43:08.986369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:08.986426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:08.986440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:08.986447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:08.986452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:08.986467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:08.996389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:08.996446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:08.996460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:08.996466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:08.996472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:08.996486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:09.006412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.006463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.006477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.006483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.006488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.006502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 
00:32:37.396 [2024-11-20 06:43:09.016438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.016495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.016508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.016515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.016520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.016535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:09.026473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.026525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.026539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.026545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.026551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.026566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:09.036501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.036555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.036569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.036576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.036582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.036596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 
00:32:37.396 [2024-11-20 06:43:09.046537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.046590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.046603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.046610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.046616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.046630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:09.056549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.056609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.056639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.056646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.056652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.056672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:09.066585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.066640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.066657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.066664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.066669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.066684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 
00:32:37.396 [2024-11-20 06:43:09.076618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.076672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.076685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.076692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.076698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.076713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:09.086700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.086761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.396 [2024-11-20 06:43:09.086775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.396 [2024-11-20 06:43:09.086781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.396 [2024-11-20 06:43:09.086787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.396 [2024-11-20 06:43:09.086802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.396 qpair failed and we were unable to recover it. 00:32:37.396 [2024-11-20 06:43:09.096680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.396 [2024-11-20 06:43:09.096779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.096792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.096799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.096805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.096821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 
00:32:37.397 [2024-11-20 06:43:09.106691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.106745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.106759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.106766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.106775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.106790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 00:32:37.397 [2024-11-20 06:43:09.116811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.116889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.116904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.116911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.116917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.116931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 00:32:37.397 [2024-11-20 06:43:09.126753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.126824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.126838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.126845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.126852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.126866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 
00:32:37.397 [2024-11-20 06:43:09.136792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.136858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.136873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.136879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.136885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.136899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 00:32:37.397 [2024-11-20 06:43:09.146823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.146889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.146904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.146910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.146916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.146931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 00:32:37.397 [2024-11-20 06:43:09.156847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.156907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.156920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.156926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.156932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.156946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 
00:32:37.397 [2024-11-20 06:43:09.166803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.166855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.166869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.166876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.166882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.166897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 00:32:37.397 [2024-11-20 06:43:09.176911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.176962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.176975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.176982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.176987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.177001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 00:32:37.397 [2024-11-20 06:43:09.186951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.187008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.187021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.187027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.187033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.187047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 
00:32:37.397 [2024-11-20 06:43:09.196947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.197002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.197018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.197025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.197031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.197045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 00:32:37.397 [2024-11-20 06:43:09.207000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.207056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.207069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.207076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.207081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.207096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 00:32:37.397 [2024-11-20 06:43:09.216996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.397 [2024-11-20 06:43:09.217049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.397 [2024-11-20 06:43:09.217062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.397 [2024-11-20 06:43:09.217069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.397 [2024-11-20 06:43:09.217075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.397 [2024-11-20 06:43:09.217089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.397 qpair failed and we were unable to recover it. 
00:32:37.656 [2024-11-20 06:43:09.227080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.656 [2024-11-20 06:43:09.227143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.656 [2024-11-20 06:43:09.227161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.656 [2024-11-20 06:43:09.227169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.656 [2024-11-20 06:43:09.227175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.656 [2024-11-20 06:43:09.227191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.656 qpair failed and we were unable to recover it. 00:32:37.656 [2024-11-20 06:43:09.237088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.656 [2024-11-20 06:43:09.237150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.656 [2024-11-20 06:43:09.237167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.656 [2024-11-20 06:43:09.237178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.656 [2024-11-20 06:43:09.237184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.656 [2024-11-20 06:43:09.237200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.656 qpair failed and we were unable to recover it. 00:32:37.656 [2024-11-20 06:43:09.247140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.656 [2024-11-20 06:43:09.247193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.656 [2024-11-20 06:43:09.247213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.656 [2024-11-20 06:43:09.247220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.656 [2024-11-20 06:43:09.247226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.656 [2024-11-20 06:43:09.247241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.656 qpair failed and we were unable to recover it. 
00:32:37.656 [2024-11-20 06:43:09.257111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.656 [2024-11-20 06:43:09.257169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.656 [2024-11-20 06:43:09.257183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.656 [2024-11-20 06:43:09.257189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.656 [2024-11-20 06:43:09.257195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:37.656 [2024-11-20 06:43:09.257214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:37.656 qpair failed and we were unable to recover it.
[... the same seven-message connect-failure sequence repeats verbatim for every subsequent I/O qpair connect attempt (target timestamps 2024-11-20 06:43:09.267 through 06:43:09.939, console time 00:32:37.656 through 00:32:38.180), always against tqpair=0x7fd15c000b90 with sct 1, sc 130, and always ending "qpair failed and we were unable to recover it." ...]
00:32:38.180 [2024-11-20 06:43:09.949119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.180 [2024-11-20 06:43:09.949218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.180 [2024-11-20 06:43:09.949231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.180 [2024-11-20 06:43:09.949238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.180 [2024-11-20 06:43:09.949243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.180 [2024-11-20 06:43:09.949258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.180 qpair failed and we were unable to recover it. 00:32:38.180 [2024-11-20 06:43:09.959125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.180 [2024-11-20 06:43:09.959181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.180 [2024-11-20 06:43:09.959195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.180 [2024-11-20 06:43:09.959204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.180 [2024-11-20 06:43:09.959211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.180 [2024-11-20 06:43:09.959225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.180 qpair failed and we were unable to recover it. 00:32:38.180 [2024-11-20 06:43:09.969139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.180 [2024-11-20 06:43:09.969193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.180 [2024-11-20 06:43:09.969210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.180 [2024-11-20 06:43:09.969217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.180 [2024-11-20 06:43:09.969222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.180 [2024-11-20 06:43:09.969240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.180 qpair failed and we were unable to recover it. 
00:32:38.180 [2024-11-20 06:43:09.979214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.180 [2024-11-20 06:43:09.979267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.180 [2024-11-20 06:43:09.979280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.180 [2024-11-20 06:43:09.979286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.180 [2024-11-20 06:43:09.979292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.180 [2024-11-20 06:43:09.979306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.180 qpair failed and we were unable to recover it. 00:32:38.180 [2024-11-20 06:43:09.989212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.180 [2024-11-20 06:43:09.989260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.180 [2024-11-20 06:43:09.989273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.180 [2024-11-20 06:43:09.989279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.180 [2024-11-20 06:43:09.989285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.180 [2024-11-20 06:43:09.989299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.180 qpair failed and we were unable to recover it. 00:32:38.180 [2024-11-20 06:43:09.999158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.180 [2024-11-20 06:43:09.999216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.180 [2024-11-20 06:43:09.999230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.180 [2024-11-20 06:43:09.999237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.180 [2024-11-20 06:43:09.999243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.180 [2024-11-20 06:43:09.999257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.180 qpair failed and we were unable to recover it. 
00:32:38.180 [2024-11-20 06:43:10.009288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.180 [2024-11-20 06:43:10.009368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.180 [2024-11-20 06:43:10.009389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.180 [2024-11-20 06:43:10.009397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.180 [2024-11-20 06:43:10.009404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.180 [2024-11-20 06:43:10.009422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.180 qpair failed and we were unable to recover it. 00:32:38.439 [2024-11-20 06:43:10.019331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.439 [2024-11-20 06:43:10.019395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.439 [2024-11-20 06:43:10.019409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.439 [2024-11-20 06:43:10.019416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.439 [2024-11-20 06:43:10.019423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.439 [2024-11-20 06:43:10.019439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.439 qpair failed and we were unable to recover it. 00:32:38.439 [2024-11-20 06:43:10.029397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.439 [2024-11-20 06:43:10.029464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.439 [2024-11-20 06:43:10.029478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.439 [2024-11-20 06:43:10.029486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.439 [2024-11-20 06:43:10.029493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.439 [2024-11-20 06:43:10.029509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.439 qpair failed and we were unable to recover it. 
00:32:38.439 [2024-11-20 06:43:10.039369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.439 [2024-11-20 06:43:10.039433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.439 [2024-11-20 06:43:10.039450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.439 [2024-11-20 06:43:10.039458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.439 [2024-11-20 06:43:10.039465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.439 [2024-11-20 06:43:10.039481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.439 qpair failed and we were unable to recover it. 00:32:38.439 [2024-11-20 06:43:10.049396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.439 [2024-11-20 06:43:10.049452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.439 [2024-11-20 06:43:10.049466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.439 [2024-11-20 06:43:10.049472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.439 [2024-11-20 06:43:10.049478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.439 [2024-11-20 06:43:10.049493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.439 qpair failed and we were unable to recover it. 00:32:38.439 [2024-11-20 06:43:10.059377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.059480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.059500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.059507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.059514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.059530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 
00:32:38.440 [2024-11-20 06:43:10.069439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.069497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.069513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.069520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.069526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.069541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 00:32:38.440 [2024-11-20 06:43:10.079467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.079543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.079557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.079564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.079569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.079584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 00:32:38.440 [2024-11-20 06:43:10.089447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.089504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.089517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.089524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.089530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.089544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 
00:32:38.440 [2024-11-20 06:43:10.099533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.099585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.099598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.099605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.099614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.099629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 00:32:38.440 [2024-11-20 06:43:10.109529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.109582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.109599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.109606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.109612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.109626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 00:32:38.440 [2024-11-20 06:43:10.119568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.119626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.119639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.119645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.119651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.119665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 
00:32:38.440 [2024-11-20 06:43:10.129612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.129677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.129690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.129697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.129703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.129717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 00:32:38.440 [2024-11-20 06:43:10.139602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.139656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.139669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.139676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.139681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.139696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 00:32:38.440 [2024-11-20 06:43:10.149637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.149708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.149722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.149728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.149734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.149749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 
00:32:38.440 [2024-11-20 06:43:10.159696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.159753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.159766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.159773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.159779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.159793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 00:32:38.440 [2024-11-20 06:43:10.169713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.169770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.169785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.169791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.169798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.169812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 00:32:38.440 [2024-11-20 06:43:10.179739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.440 [2024-11-20 06:43:10.179792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.440 [2024-11-20 06:43:10.179805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.440 [2024-11-20 06:43:10.179812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.440 [2024-11-20 06:43:10.179817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.440 [2024-11-20 06:43:10.179832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.440 qpair failed and we were unable to recover it. 
00:32:38.440 [2024-11-20 06:43:10.189778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.189832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.189849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.189855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.189861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.189876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 00:32:38.441 [2024-11-20 06:43:10.199794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.199851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.199864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.199870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.199876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.199890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 00:32:38.441 [2024-11-20 06:43:10.209861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.209915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.209929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.209935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.209941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.209955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 
00:32:38.441 [2024-11-20 06:43:10.219868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.219965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.219978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.219984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.219990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.220005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 00:32:38.441 [2024-11-20 06:43:10.229896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.229954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.229967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.229974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.229983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.229998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 00:32:38.441 [2024-11-20 06:43:10.239854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.239910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.239924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.239931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.239937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.239951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 
00:32:38.441 [2024-11-20 06:43:10.249910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.249989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.250002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.250009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.250015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.250029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 00:32:38.441 [2024-11-20 06:43:10.260013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.260068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.260082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.260089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.260094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.260109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 00:32:38.441 [2024-11-20 06:43:10.269999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.441 [2024-11-20 06:43:10.270073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.441 [2024-11-20 06:43:10.270088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.441 [2024-11-20 06:43:10.270094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.441 [2024-11-20 06:43:10.270100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.441 [2024-11-20 06:43:10.270114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.441 qpair failed and we were unable to recover it. 
00:32:38.700 [2024-11-20 06:43:10.280030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.700 [2024-11-20 06:43:10.280102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.700 [2024-11-20 06:43:10.280115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.700 [2024-11-20 06:43:10.280122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.700 [2024-11-20 06:43:10.280127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.700 [2024-11-20 06:43:10.280142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.700 qpair failed and we were unable to recover it. 00:32:38.700 [2024-11-20 06:43:10.290101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.700 [2024-11-20 06:43:10.290156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.700 [2024-11-20 06:43:10.290169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.700 [2024-11-20 06:43:10.290176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.700 [2024-11-20 06:43:10.290182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.700 [2024-11-20 06:43:10.290196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.700 qpair failed and we were unable to recover it. 00:32:38.700 [2024-11-20 06:43:10.300054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.700 [2024-11-20 06:43:10.300108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.700 [2024-11-20 06:43:10.300121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.700 [2024-11-20 06:43:10.300127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.700 [2024-11-20 06:43:10.300133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.700 [2024-11-20 06:43:10.300147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.700 qpair failed and we were unable to recover it. 
00:32:38.700 [2024-11-20 06:43:10.310102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.700 [2024-11-20 06:43:10.310150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.700 [2024-11-20 06:43:10.310163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.700 [2024-11-20 06:43:10.310169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.700 [2024-11-20 06:43:10.310175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.700 [2024-11-20 06:43:10.310190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.700 qpair failed and we were unable to recover it. 00:32:38.700 [2024-11-20 06:43:10.320150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.700 [2024-11-20 06:43:10.320213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.700 [2024-11-20 06:43:10.320226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.700 [2024-11-20 06:43:10.320232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.700 [2024-11-20 06:43:10.320238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.700 [2024-11-20 06:43:10.320253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.700 qpair failed and we were unable to recover it. 00:32:38.700 [2024-11-20 06:43:10.330162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.700 [2024-11-20 06:43:10.330243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.700 [2024-11-20 06:43:10.330256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.700 [2024-11-20 06:43:10.330263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.700 [2024-11-20 06:43:10.330269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.330282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 
00:32:38.701 [2024-11-20 06:43:10.340239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.340332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.340346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.340353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.340358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.340373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 00:32:38.701 [2024-11-20 06:43:10.350223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.350277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.350290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.350297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.350303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.350317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 00:32:38.701 [2024-11-20 06:43:10.360249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.360305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.360319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.360329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.360335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.360349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 
00:32:38.701 [2024-11-20 06:43:10.370215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.370271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.370285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.370292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.370300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.370315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 00:32:38.701 [2024-11-20 06:43:10.380299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.380355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.380369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.380376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.380382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.380396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 00:32:38.701 [2024-11-20 06:43:10.390258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.390322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.390335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.390342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.390347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.390362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 
00:32:38.701 [2024-11-20 06:43:10.400381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.400436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.400449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.400455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.400461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.400480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 00:32:38.701 [2024-11-20 06:43:10.410435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.410488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.410501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.410508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.410513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.410527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 00:32:38.701 [2024-11-20 06:43:10.420424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.420473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.420487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.420493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.420499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.420512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 
00:32:38.701 [2024-11-20 06:43:10.430451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.430503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.430516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.430523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.430528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.430542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 00:32:38.701 [2024-11-20 06:43:10.440494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.440546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.440559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.440566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.440571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.440586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 00:32:38.701 [2024-11-20 06:43:10.450487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.450542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.450556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.450562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.450568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.701 [2024-11-20 06:43:10.450582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.701 qpair failed and we were unable to recover it. 
00:32:38.701 [2024-11-20 06:43:10.460594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.701 [2024-11-20 06:43:10.460646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.701 [2024-11-20 06:43:10.460659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.701 [2024-11-20 06:43:10.460665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.701 [2024-11-20 06:43:10.460671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.702 [2024-11-20 06:43:10.460685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.702 qpair failed and we were unable to recover it. 00:32:38.702 [2024-11-20 06:43:10.470575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.702 [2024-11-20 06:43:10.470625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.702 [2024-11-20 06:43:10.470638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.702 [2024-11-20 06:43:10.470644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.702 [2024-11-20 06:43:10.470650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.702 [2024-11-20 06:43:10.470664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.702 qpair failed and we were unable to recover it. 00:32:38.702 [2024-11-20 06:43:10.480611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.702 [2024-11-20 06:43:10.480687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.702 [2024-11-20 06:43:10.480700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.702 [2024-11-20 06:43:10.480707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.702 [2024-11-20 06:43:10.480712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.702 [2024-11-20 06:43:10.480727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.702 qpair failed and we were unable to recover it. 
00:32:38.702 [2024-11-20 06:43:10.490626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.702 [2024-11-20 06:43:10.490674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.702 [2024-11-20 06:43:10.490688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.702 [2024-11-20 06:43:10.490700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.702 [2024-11-20 06:43:10.490706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.702 [2024-11-20 06:43:10.490720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.702 qpair failed and we were unable to recover it. 00:32:38.702 [2024-11-20 06:43:10.500661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.702 [2024-11-20 06:43:10.500760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.702 [2024-11-20 06:43:10.500773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.702 [2024-11-20 06:43:10.500780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.702 [2024-11-20 06:43:10.500785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.702 [2024-11-20 06:43:10.500800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.702 qpair failed and we were unable to recover it. 00:32:38.702 [2024-11-20 06:43:10.510680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.702 [2024-11-20 06:43:10.510728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.702 [2024-11-20 06:43:10.510741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.702 [2024-11-20 06:43:10.510747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.702 [2024-11-20 06:43:10.510752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.702 [2024-11-20 06:43:10.510767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.702 qpair failed and we were unable to recover it. 
00:32:38.702 [2024-11-20 06:43:10.520776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.702 [2024-11-20 06:43:10.520858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.702 [2024-11-20 06:43:10.520871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.702 [2024-11-20 06:43:10.520877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.702 [2024-11-20 06:43:10.520883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.702 [2024-11-20 06:43:10.520897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.702 qpair failed and we were unable to recover it. 00:32:38.702 [2024-11-20 06:43:10.530788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.702 [2024-11-20 06:43:10.530887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.702 [2024-11-20 06:43:10.530899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.702 [2024-11-20 06:43:10.530905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.702 [2024-11-20 06:43:10.530911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.702 [2024-11-20 06:43:10.530928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.702 qpair failed and we were unable to recover it. 00:32:38.961 [2024-11-20 06:43:10.540763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.961 [2024-11-20 06:43:10.540861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.961 [2024-11-20 06:43:10.540874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.961 [2024-11-20 06:43:10.540881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.961 [2024-11-20 06:43:10.540887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.961 [2024-11-20 06:43:10.540902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.961 qpair failed and we were unable to recover it. 
00:32:38.961 [2024-11-20 06:43:10.550789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.961 [2024-11-20 06:43:10.550840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.961 [2024-11-20 06:43:10.550854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.961 [2024-11-20 06:43:10.550860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.961 [2024-11-20 06:43:10.550866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.961 [2024-11-20 06:43:10.550880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.961 qpair failed and we were unable to recover it. 00:32:38.961 [2024-11-20 06:43:10.560829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.961 [2024-11-20 06:43:10.560886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.961 [2024-11-20 06:43:10.560899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.961 [2024-11-20 06:43:10.560905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.560911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.560925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.570870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.570929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.570942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.570949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.570955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.570968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 
00:32:38.962 [2024-11-20 06:43:10.580846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.580912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.580924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.580931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.580936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.580950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.590841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.590894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.590908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.590915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.590921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.590935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.600892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.600949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.600963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.600970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.600975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.600989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 
00:32:38.962 [2024-11-20 06:43:10.610968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.611024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.611037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.611044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.611050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.611063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.620946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.621028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.621045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.621052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.621058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.621072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.630953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.631004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.631017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.631024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.631030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.631044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 
00:32:38.962 [2024-11-20 06:43:10.641056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.641113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.641128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.641134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.641140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.641155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.651100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.651159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.651173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.651179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.651185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.651199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.661066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.661115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.661129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.661135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.661144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.661159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 
00:32:38.962 [2024-11-20 06:43:10.671169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.671230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.671243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.671250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.671255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.671269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.681214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.681269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.681282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.681288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.681294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.962 [2024-11-20 06:43:10.681308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.962 qpair failed and we were unable to recover it. 00:32:38.962 [2024-11-20 06:43:10.691231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.962 [2024-11-20 06:43:10.691305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.962 [2024-11-20 06:43:10.691319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.962 [2024-11-20 06:43:10.691326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.962 [2024-11-20 06:43:10.691331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.691345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 
00:32:38.963 [2024-11-20 06:43:10.701243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.701298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.701311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.701317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.701323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.701337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 00:32:38.963 [2024-11-20 06:43:10.711173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.711237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.711250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.711257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.711262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.711277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 00:32:38.963 [2024-11-20 06:43:10.721287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.721345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.721358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.721364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.721370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.721384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 
00:32:38.963 [2024-11-20 06:43:10.731258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.731311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.731324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.731330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.731336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.731350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 00:32:38.963 [2024-11-20 06:43:10.741311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.741400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.741414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.741420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.741426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.741441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 00:32:38.963 [2024-11-20 06:43:10.751361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.751442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.751459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.751466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.751471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.751485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 
00:32:38.963 [2024-11-20 06:43:10.761348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.761407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.761420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.761427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.761432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.761446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 00:32:38.963 [2024-11-20 06:43:10.771372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.771430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.771443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.771450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.771455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.771470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 00:32:38.963 [2024-11-20 06:43:10.781437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.781496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.781510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.781516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.781522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.781536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 
00:32:38.963 [2024-11-20 06:43:10.791498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.963 [2024-11-20 06:43:10.791557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.963 [2024-11-20 06:43:10.791570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.963 [2024-11-20 06:43:10.791577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.963 [2024-11-20 06:43:10.791586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:38.963 [2024-11-20 06:43:10.791600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:38.963 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.801528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.801587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.801600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.801607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.801613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.801628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.811590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.811641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.811654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.811660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.811666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.811680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 
00:32:39.223 [2024-11-20 06:43:10.821562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.821611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.821624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.821630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.821636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.821650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.831530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.831595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.831608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.831614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.831620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.831634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.841574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.841631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.841645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.841651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.841657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.841671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 
00:32:39.223 [2024-11-20 06:43:10.851667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.851722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.851735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.851741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.851747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.851761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.861653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.861708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.861721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.861727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.861733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.861747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.871669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.871722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.871734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.871740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.871746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.871760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 
00:32:39.223 [2024-11-20 06:43:10.881786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.881866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.881879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.881885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.881891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.881905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.891768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.891865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.891877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.891884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.891889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.891903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.901753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.901808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.901822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.901828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.901834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.901848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 
00:32:39.223 [2024-11-20 06:43:10.911845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.911894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.911907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.911913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.911920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.223 [2024-11-20 06:43:10.911934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.223 qpair failed and we were unable to recover it. 00:32:39.223 [2024-11-20 06:43:10.921884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.223 [2024-11-20 06:43:10.921950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.223 [2024-11-20 06:43:10.921963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.223 [2024-11-20 06:43:10.921973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.223 [2024-11-20 06:43:10.921979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:10.921994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:10.931913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:10.931982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:10.931995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:10.932001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:10.932007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:10.932021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 
00:32:39.224 [2024-11-20 06:43:10.941952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:10.942001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:10.942014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:10.942021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:10.942028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:10.942042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:10.951966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:10.952025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:10.952038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:10.952045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:10.952051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:10.952065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:10.961987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:10.962039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:10.962053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:10.962059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:10.962065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:10.962083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 
00:32:39.224 [2024-11-20 06:43:10.972046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:10.972105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:10.972118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:10.972125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:10.972130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:10.972145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:10.981983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:10.982042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:10.982055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:10.982062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:10.982067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:10.982081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:10.991996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:10.992048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:10.992061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:10.992067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:10.992073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:10.992087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 
00:32:39.224 [2024-11-20 06:43:11.002059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:11.002122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:11.002139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:11.002145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:11.002151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:11.002167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:11.012171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:11.012231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:11.012245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:11.012251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:11.012257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:11.012271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:11.022152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:11.022210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:11.022224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:11.022231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:11.022237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:11.022251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 
00:32:39.224 [2024-11-20 06:43:11.032223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:11.032278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:11.032291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:11.032298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:11.032304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:11.032318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:11.042220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:11.042275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:11.042289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:11.042295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:11.042301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.224 [2024-11-20 06:43:11.042316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.224 qpair failed and we were unable to recover it. 00:32:39.224 [2024-11-20 06:43:11.052262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.224 [2024-11-20 06:43:11.052325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.224 [2024-11-20 06:43:11.052342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.224 [2024-11-20 06:43:11.052349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.224 [2024-11-20 06:43:11.052354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.225 [2024-11-20 06:43:11.052369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.225 qpair failed and we were unable to recover it. 
00:32:39.484 [2024-11-20 06:43:11.062278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.062338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.062351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.062357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.062363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.062377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 00:32:39.484 [2024-11-20 06:43:11.072285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.072343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.072356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.072363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.072369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.072383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 00:32:39.484 [2024-11-20 06:43:11.082321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.082406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.082419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.082426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.082432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.082445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 
00:32:39.484 [2024-11-20 06:43:11.092359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.092409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.092422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.092429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.092435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.092454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 00:32:39.484 [2024-11-20 06:43:11.102388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.102445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.102460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.102467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.102473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.102488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 00:32:39.484 [2024-11-20 06:43:11.112405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.112458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.112473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.112480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.112487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.112503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 
00:32:39.484 [2024-11-20 06:43:11.122451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.122505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.122518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.122524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.122530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.122544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 00:32:39.484 [2024-11-20 06:43:11.132498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.132552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.132566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.132572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.132578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.132592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 00:32:39.484 [2024-11-20 06:43:11.142491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.142588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.142601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.142607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.484 [2024-11-20 06:43:11.142613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.484 [2024-11-20 06:43:11.142627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.484 qpair failed and we were unable to recover it. 
00:32:39.484 [2024-11-20 06:43:11.152481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.484 [2024-11-20 06:43:11.152547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.484 [2024-11-20 06:43:11.152560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.484 [2024-11-20 06:43:11.152567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.152572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.152588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.162587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.162668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.162681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.162688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.162693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.162707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.172620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.172680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.172693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.172699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.172705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.172719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 
00:32:39.485 [2024-11-20 06:43:11.182612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.182664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.182680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.182686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.182692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.182706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.192633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.192681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.192694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.192700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.192705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.192719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.202667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.202721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.202734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.202740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.202746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.202760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 
00:32:39.485 [2024-11-20 06:43:11.212695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.212773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.212786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.212793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.212798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.212813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.222755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.222824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.222837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.222844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.222855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.222869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.232669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.232731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.232744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.232751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.232756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.232771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 
00:32:39.485 [2024-11-20 06:43:11.242832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.242914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.242928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.242935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.242941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.242955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.252862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.252923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.252936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.252943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.252949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.252963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.262850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.262899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.262911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.262918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.262923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.262937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 
00:32:39.485 [2024-11-20 06:43:11.272860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.272910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.272923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.272930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.485 [2024-11-20 06:43:11.272936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.485 [2024-11-20 06:43:11.272950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.485 qpair failed and we were unable to recover it. 00:32:39.485 [2024-11-20 06:43:11.282965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.485 [2024-11-20 06:43:11.283048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.485 [2024-11-20 06:43:11.283061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.485 [2024-11-20 06:43:11.283067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.486 [2024-11-20 06:43:11.283073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.486 [2024-11-20 06:43:11.283087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.486 qpair failed and we were unable to recover it. 00:32:39.486 [2024-11-20 06:43:11.293015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.486 [2024-11-20 06:43:11.293076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.486 [2024-11-20 06:43:11.293089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.486 [2024-11-20 06:43:11.293096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.486 [2024-11-20 06:43:11.293102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.486 [2024-11-20 06:43:11.293116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.486 qpair failed and we were unable to recover it. 
00:32:39.486 [2024-11-20 06:43:11.302986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.486 [2024-11-20 06:43:11.303060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.486 [2024-11-20 06:43:11.303073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.486 [2024-11-20 06:43:11.303080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.486 [2024-11-20 06:43:11.303085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.486 [2024-11-20 06:43:11.303100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.486 qpair failed and we were unable to recover it. 00:32:39.486 [2024-11-20 06:43:11.313006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.486 [2024-11-20 06:43:11.313084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.486 [2024-11-20 06:43:11.313101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.486 [2024-11-20 06:43:11.313107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.486 [2024-11-20 06:43:11.313112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.486 [2024-11-20 06:43:11.313127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.486 qpair failed and we were unable to recover it. 00:32:39.745 [2024-11-20 06:43:11.323045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.745 [2024-11-20 06:43:11.323102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.745 [2024-11-20 06:43:11.323115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.745 [2024-11-20 06:43:11.323122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.745 [2024-11-20 06:43:11.323128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.745 [2024-11-20 06:43:11.323142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.745 qpair failed and we were unable to recover it. 
00:32:39.745 [2024-11-20 06:43:11.333043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.745 [2024-11-20 06:43:11.333114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.745 [2024-11-20 06:43:11.333127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.745 [2024-11-20 06:43:11.333133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.745 [2024-11-20 06:43:11.333139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.745 [2024-11-20 06:43:11.333154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.745 qpair failed and we were unable to recover it. 00:32:39.745 [2024-11-20 06:43:11.343049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.745 [2024-11-20 06:43:11.343097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.745 [2024-11-20 06:43:11.343110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.745 [2024-11-20 06:43:11.343117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.745 [2024-11-20 06:43:11.343122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.745 [2024-11-20 06:43:11.343137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.745 qpair failed and we were unable to recover it. 00:32:39.745 [2024-11-20 06:43:11.353080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.745 [2024-11-20 06:43:11.353136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.745 [2024-11-20 06:43:11.353149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.745 [2024-11-20 06:43:11.353159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.745 [2024-11-20 06:43:11.353164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.745 [2024-11-20 06:43:11.353179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.745 qpair failed and we were unable to recover it. 
00:32:39.745 [2024-11-20 06:43:11.363160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.745 [2024-11-20 06:43:11.363219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.363232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.363239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.363244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.363259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 00:32:39.746 [2024-11-20 06:43:11.373152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.373208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.373221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.373227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.373233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.373248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 00:32:39.746 [2024-11-20 06:43:11.383170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.383226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.383239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.383245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.383251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.383265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 
00:32:39.746 [2024-11-20 06:43:11.393196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.393253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.393266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.393272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.393278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.393291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 00:32:39.746 [2024-11-20 06:43:11.403242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.403297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.403310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.403317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.403323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.403338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 00:32:39.746 [2024-11-20 06:43:11.413292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.413347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.413361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.413367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.413373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.413388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 
00:32:39.746 [2024-11-20 06:43:11.423367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.423423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.423435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.423442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.423448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.423462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 00:32:39.746 [2024-11-20 06:43:11.433368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.433459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.433472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.433479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.433485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.433498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 00:32:39.746 [2024-11-20 06:43:11.443347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.443424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.443438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.443445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.443450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.443466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 
00:32:39.746 [2024-11-20 06:43:11.453430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.453510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.453524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.453530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.746 [2024-11-20 06:43:11.453536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.746 [2024-11-20 06:43:11.453550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.746 qpair failed and we were unable to recover it. 00:32:39.746 [2024-11-20 06:43:11.463391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.746 [2024-11-20 06:43:11.463475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.746 [2024-11-20 06:43:11.463489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.746 [2024-11-20 06:43:11.463495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.463501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.463515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 00:32:39.747 [2024-11-20 06:43:11.473442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.473495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.473508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.473514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.473521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.473534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 
00:32:39.747 [2024-11-20 06:43:11.483470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.483547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.483560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.483570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.483575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.483589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 00:32:39.747 [2024-11-20 06:43:11.493462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.493522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.493535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.493542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.493548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.493561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 00:32:39.747 [2024-11-20 06:43:11.503519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.503570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.503584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.503590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.503596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.503609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 
00:32:39.747 [2024-11-20 06:43:11.513532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.513606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.513619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.513625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.513631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.513645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 00:32:39.747 [2024-11-20 06:43:11.523584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.523666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.523679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.523685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.523691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.523708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 00:32:39.747 [2024-11-20 06:43:11.533647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.533706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.533719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.533726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.533731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.533745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 
00:32:39.747 [2024-11-20 06:43:11.543628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.543704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.543717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.543724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.543730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.543744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 00:32:39.747 [2024-11-20 06:43:11.553678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.553740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.553753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.553759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.553765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.553779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 00:32:39.747 [2024-11-20 06:43:11.563696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.563753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.563766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.563772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.563778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.563792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 
00:32:39.747 [2024-11-20 06:43:11.573706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.747 [2024-11-20 06:43:11.573760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.747 [2024-11-20 06:43:11.573773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.747 [2024-11-20 06:43:11.573779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.747 [2024-11-20 06:43:11.573785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:39.747 [2024-11-20 06:43:11.573798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:39.747 qpair failed and we were unable to recover it. 00:32:40.007 [2024-11-20 06:43:11.583740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.007 [2024-11-20 06:43:11.583792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.007 [2024-11-20 06:43:11.583805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.007 [2024-11-20 06:43:11.583812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.007 [2024-11-20 06:43:11.583817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:40.007 [2024-11-20 06:43:11.583831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:40.007 qpair failed and we were unable to recover it. 00:32:40.007 [2024-11-20 06:43:11.593775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.007 [2024-11-20 06:43:11.593828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.007 [2024-11-20 06:43:11.593841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.007 [2024-11-20 06:43:11.593847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.007 [2024-11-20 06:43:11.593853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:40.007 [2024-11-20 06:43:11.593867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:40.007 qpair failed and we were unable to recover it. 
00:32:40.007 [2024-11-20 06:43:11.603841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.603919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.603932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.603939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.603944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.603958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.613846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.613898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.613914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.613921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.613926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.613940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.623867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.623919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.623933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.623940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.623945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.623960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.633880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.633932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.633945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.633952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.633958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.633971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.643921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.643977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.643991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.643997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.644003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.644017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.653961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.654020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.654034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.654040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.654046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.654064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.663967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.664020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.664034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.664040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.664046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.664060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.674041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.674092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.674105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.674112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.674117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.674131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.684040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.684097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.007 [2024-11-20 06:43:11.684109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.007 [2024-11-20 06:43:11.684116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.007 [2024-11-20 06:43:11.684122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.007 [2024-11-20 06:43:11.684135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.007 qpair failed and we were unable to recover it.
00:32:40.007 [2024-11-20 06:43:11.694098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.007 [2024-11-20 06:43:11.694152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.694166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.694172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.694178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.694193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.704096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.704195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.704212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.704219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.704225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.704239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.714106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.714158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.714171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.714178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.714184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.714197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.724172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.724233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.724246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.724253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.724259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.724273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.734171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.734228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.734242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.734249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.734254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.734269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.744197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.744257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.744274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.744280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.744286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.744300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.754227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.754277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.754289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.754296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.754302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.754316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.764263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.764316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.764330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.764336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.764342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.764356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.774289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.774345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.774358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.774364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.774370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.774383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.784321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.784369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.784382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.784389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.784397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.784412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.794382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.794434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.794447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.794453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.794459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.008 [2024-11-20 06:43:11.794472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.008 qpair failed and we were unable to recover it.
00:32:40.008 [2024-11-20 06:43:11.804387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.008 [2024-11-20 06:43:11.804440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.008 [2024-11-20 06:43:11.804453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.008 [2024-11-20 06:43:11.804460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.008 [2024-11-20 06:43:11.804465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.009 [2024-11-20 06:43:11.804480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.009 qpair failed and we were unable to recover it.
00:32:40.009 [2024-11-20 06:43:11.814411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.009 [2024-11-20 06:43:11.814460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.009 [2024-11-20 06:43:11.814473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.009 [2024-11-20 06:43:11.814480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.009 [2024-11-20 06:43:11.814485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.009 [2024-11-20 06:43:11.814500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.009 qpair failed and we were unable to recover it.
00:32:40.009 [2024-11-20 06:43:11.824492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.009 [2024-11-20 06:43:11.824547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.009 [2024-11-20 06:43:11.824560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.009 [2024-11-20 06:43:11.824567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.009 [2024-11-20 06:43:11.824572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.009 [2024-11-20 06:43:11.824586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.009 qpair failed and we were unable to recover it.
00:32:40.009 [2024-11-20 06:43:11.834488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.009 [2024-11-20 06:43:11.834540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.009 [2024-11-20 06:43:11.834554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.009 [2024-11-20 06:43:11.834560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.009 [2024-11-20 06:43:11.834566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.009 [2024-11-20 06:43:11.834580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.009 qpair failed and we were unable to recover it.
00:32:40.268 [2024-11-20 06:43:11.844541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.268 [2024-11-20 06:43:11.844602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.268 [2024-11-20 06:43:11.844616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.268 [2024-11-20 06:43:11.844622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.268 [2024-11-20 06:43:11.844628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.268 [2024-11-20 06:43:11.844642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.268 qpair failed and we were unable to recover it.
00:32:40.268 [2024-11-20 06:43:11.854536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.268 [2024-11-20 06:43:11.854592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.268 [2024-11-20 06:43:11.854605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.268 [2024-11-20 06:43:11.854611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.268 [2024-11-20 06:43:11.854617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.268 [2024-11-20 06:43:11.854631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.268 qpair failed and we were unable to recover it.
00:32:40.268 [2024-11-20 06:43:11.864478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.268 [2024-11-20 06:43:11.864533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.268 [2024-11-20 06:43:11.864545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.268 [2024-11-20 06:43:11.864552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.268 [2024-11-20 06:43:11.864558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.268 [2024-11-20 06:43:11.864572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.268 qpair failed and we were unable to recover it.
00:32:40.268 [2024-11-20 06:43:11.874574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.268 [2024-11-20 06:43:11.874623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.268 [2024-11-20 06:43:11.874640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.268 [2024-11-20 06:43:11.874646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.268 [2024-11-20 06:43:11.874652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.268 [2024-11-20 06:43:11.874666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.268 qpair failed and we were unable to recover it.
00:32:40.268 [2024-11-20 06:43:11.884610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.268 [2024-11-20 06:43:11.884692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.268 [2024-11-20 06:43:11.884704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.268 [2024-11-20 06:43:11.884711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.268 [2024-11-20 06:43:11.884717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.268 [2024-11-20 06:43:11.884730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.268 qpair failed and we were unable to recover it.
00:32:40.268 [2024-11-20 06:43:11.894608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.894662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.894675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.894681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.894687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.894701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.904651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.904706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.904719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.904726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.904732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.904746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.914676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.914725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.914738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.914747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.914753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.914767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.924726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.924780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.924793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.924799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.924805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.924819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.934754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.934811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.934825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.934831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.934837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.934851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.944768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.944818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.944832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.944838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.944844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.944858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.954803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.954851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.954864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.954871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.954877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.954891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.964832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.964885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.964898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.964905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.964911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.964925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.974860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.974916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.974929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.974936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.974942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.269 [2024-11-20 06:43:11.974955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.269 qpair failed and we were unable to recover it.
00:32:40.269 [2024-11-20 06:43:11.984906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.269 [2024-11-20 06:43:11.984972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.269 [2024-11-20 06:43:11.984985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.269 [2024-11-20 06:43:11.984991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.269 [2024-11-20 06:43:11.984997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:11.985011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:11.994946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:11.995017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.270 [2024-11-20 06:43:11.995030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.270 [2024-11-20 06:43:11.995036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.270 [2024-11-20 06:43:11.995043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:11.995056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:12.004955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:12.005013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.270 [2024-11-20 06:43:12.005026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.270 [2024-11-20 06:43:12.005033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.270 [2024-11-20 06:43:12.005038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:12.005052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:12.014976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:12.015033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.270 [2024-11-20 06:43:12.015047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.270 [2024-11-20 06:43:12.015053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.270 [2024-11-20 06:43:12.015060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:12.015076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:12.024995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:12.025058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.270 [2024-11-20 06:43:12.025071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.270 [2024-11-20 06:43:12.025078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.270 [2024-11-20 06:43:12.025085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:12.025099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:12.035029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:12.035081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.270 [2024-11-20 06:43:12.035094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.270 [2024-11-20 06:43:12.035101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.270 [2024-11-20 06:43:12.035107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:12.035121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:12.045104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:12.045191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.270 [2024-11-20 06:43:12.045211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.270 [2024-11-20 06:43:12.045221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.270 [2024-11-20 06:43:12.045226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:12.045241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:12.055115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:12.055219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.270 [2024-11-20 06:43:12.055233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.270 [2024-11-20 06:43:12.055240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.270 [2024-11-20 06:43:12.055245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:12.055260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:12.065118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:12.065170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.270 [2024-11-20 06:43:12.065183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.270 [2024-11-20 06:43:12.065190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.270 [2024-11-20 06:43:12.065196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.270 [2024-11-20 06:43:12.065215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.270 qpair failed and we were unable to recover it.
00:32:40.270 [2024-11-20 06:43:12.075145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.270 [2024-11-20 06:43:12.075200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.271 [2024-11-20 06:43:12.075217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.271 [2024-11-20 06:43:12.075224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.271 [2024-11-20 06:43:12.075230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.271 [2024-11-20 06:43:12.075244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.271 qpair failed and we were unable to recover it.
00:32:40.271 [2024-11-20 06:43:12.085108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.271 [2024-11-20 06:43:12.085166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.271 [2024-11-20 06:43:12.085178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.271 [2024-11-20 06:43:12.085185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.271 [2024-11-20 06:43:12.085190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.271 [2024-11-20 06:43:12.085212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.271 qpair failed and we were unable to recover it.
00:32:40.271 [2024-11-20 06:43:12.095214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.271 [2024-11-20 06:43:12.095270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.271 [2024-11-20 06:43:12.095282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.271 [2024-11-20 06:43:12.095288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.271 [2024-11-20 06:43:12.095294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.271 [2024-11-20 06:43:12.095308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.271 qpair failed and we were unable to recover it.
00:32:40.531 [2024-11-20 06:43:12.105186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.531 [2024-11-20 06:43:12.105279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.531 [2024-11-20 06:43:12.105292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.531 [2024-11-20 06:43:12.105299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.531 [2024-11-20 06:43:12.105304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.531 [2024-11-20 06:43:12.105318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.531 qpair failed and we were unable to recover it.
00:32:40.531 [2024-11-20 06:43:12.115284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.531 [2024-11-20 06:43:12.115361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.531 [2024-11-20 06:43:12.115375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.531 [2024-11-20 06:43:12.115382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.531 [2024-11-20 06:43:12.115388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.531 [2024-11-20 06:43:12.115403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.531 qpair failed and we were unable to recover it.
00:32:40.531 [2024-11-20 06:43:12.125232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.531 [2024-11-20 06:43:12.125290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.531 [2024-11-20 06:43:12.125303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.531 [2024-11-20 06:43:12.125309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.531 [2024-11-20 06:43:12.125315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.531 [2024-11-20 06:43:12.125329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.531 qpair failed and we were unable to recover it.
00:32:40.531 [2024-11-20 06:43:12.135325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.531 [2024-11-20 06:43:12.135382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.531 [2024-11-20 06:43:12.135395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.531 [2024-11-20 06:43:12.135402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.531 [2024-11-20 06:43:12.135408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.531 [2024-11-20 06:43:12.135422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.531 qpair failed and we were unable to recover it.
00:32:40.531 [2024-11-20 06:43:12.145326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.531 [2024-11-20 06:43:12.145401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.145415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.145422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.145428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.145443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.155405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.155455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.155469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.155476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.155482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.155496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.165400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.165459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.165472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.165479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.165485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.165498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.175354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.175407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.175424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.175431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.175437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.175452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.185515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.185566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.185579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.185585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.185591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.185605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.195526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.195604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.195618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.195624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.195630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.195644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.205447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.205512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.205525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.205532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.205537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.205552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.215471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.215525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.215538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.215544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.215553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.215567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.225584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.225638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.225650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.225656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.225662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.225676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.235568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.235622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.235637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.235643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.235649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.235663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.245740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.245794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.245808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.245814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.245819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.245834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.255597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.255649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.255663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.255669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.255675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.255689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.265623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.265677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.265690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.532 [2024-11-20 06:43:12.265696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.532 [2024-11-20 06:43:12.265702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.532 [2024-11-20 06:43:12.265716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.532 qpair failed and we were unable to recover it.
00:32:40.532 [2024-11-20 06:43:12.275691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.532 [2024-11-20 06:43:12.275755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.532 [2024-11-20 06:43:12.275768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.275775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.275780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.275795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.533 [2024-11-20 06:43:12.285754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.533 [2024-11-20 06:43:12.285809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.533 [2024-11-20 06:43:12.285822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.285828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.285835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.285849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.533 [2024-11-20 06:43:12.295790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.533 [2024-11-20 06:43:12.295839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.533 [2024-11-20 06:43:12.295852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.295859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.295864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.295879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.533 [2024-11-20 06:43:12.305731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.533 [2024-11-20 06:43:12.305785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.533 [2024-11-20 06:43:12.305801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.305807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.305813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.305828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.533 [2024-11-20 06:43:12.315829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.533 [2024-11-20 06:43:12.315882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.533 [2024-11-20 06:43:12.315895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.315901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.315907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.315921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.533 [2024-11-20 06:43:12.325881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.533 [2024-11-20 06:43:12.325937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.533 [2024-11-20 06:43:12.325950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.325956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.325961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.325975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.533 [2024-11-20 06:43:12.335829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.533 [2024-11-20 06:43:12.335888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.533 [2024-11-20 06:43:12.335902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.335908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.335914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.335928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.533 [2024-11-20 06:43:12.345921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.533 [2024-11-20 06:43:12.345974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.533 [2024-11-20 06:43:12.345988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.345994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.346003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.346017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.533 [2024-11-20 06:43:12.355918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.533 [2024-11-20 06:43:12.355994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.533 [2024-11-20 06:43:12.356008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.533 [2024-11-20 06:43:12.356014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.533 [2024-11-20 06:43:12.356020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.533 [2024-11-20 06:43:12.356034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.533 qpair failed and we were unable to recover it.
00:32:40.794 [2024-11-20 06:43:12.365995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.794 [2024-11-20 06:43:12.366053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.794 [2024-11-20 06:43:12.366066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.794 [2024-11-20 06:43:12.366073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.794 [2024-11-20 06:43:12.366079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.794 [2024-11-20 06:43:12.366094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.794 qpair failed and we were unable to recover it.
00:32:40.794 [2024-11-20 06:43:12.376009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.794 [2024-11-20 06:43:12.376111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.794 [2024-11-20 06:43:12.376124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.794 [2024-11-20 06:43:12.376130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.794 [2024-11-20 06:43:12.376136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.794 [2024-11-20 06:43:12.376151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.794 qpair failed and we were unable to recover it.
00:32:40.794 [2024-11-20 06:43:12.385965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.794 [2024-11-20 06:43:12.386014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.794 [2024-11-20 06:43:12.386028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.794 [2024-11-20 06:43:12.386034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.794 [2024-11-20 06:43:12.386040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.794 [2024-11-20 06:43:12.386055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.794 qpair failed and we were unable to recover it.
00:32:40.794 [2024-11-20 06:43:12.396026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.794 [2024-11-20 06:43:12.396119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.794 [2024-11-20 06:43:12.396133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.794 [2024-11-20 06:43:12.396139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.794 [2024-11-20 06:43:12.396145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.794 [2024-11-20 06:43:12.396159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.794 qpair failed and we were unable to recover it.
00:32:40.794 [2024-11-20 06:43:12.406014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.794 [2024-11-20 06:43:12.406067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.794 [2024-11-20 06:43:12.406081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.794 [2024-11-20 06:43:12.406087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.794 [2024-11-20 06:43:12.406093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.794 [2024-11-20 06:43:12.406107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.794 qpair failed and we were unable to recover it.
00:32:40.794 [2024-11-20 06:43:12.416102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.416156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.416169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.416176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.416181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.416196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.426079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.426131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.426143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.426150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.426156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.426170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.436098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.436151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.436168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.436175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.436181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.436195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.446217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.446271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.446284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.446290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.446296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.446310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.456165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.456222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.456236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.456243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.456248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.456263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.466254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.466320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.466334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.466340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.466347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.466360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.476268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.476320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.476333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.476342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.476348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.476363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.486333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.486386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.486399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.486405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.486412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.486427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.496336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.496388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.496401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.496407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.496413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.496427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.506403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.506459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.506473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.506479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.506485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.506499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.516380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.516436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.516449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.516455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.516460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.516474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.526419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.526478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.526490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.526497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.526502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.526517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.536443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.795 [2024-11-20 06:43:12.536497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.795 [2024-11-20 06:43:12.536511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.795 [2024-11-20 06:43:12.536517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.795 [2024-11-20 06:43:12.536522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.795 [2024-11-20 06:43:12.536536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.795 qpair failed and we were unable to recover it.
00:32:40.795 [2024-11-20 06:43:12.546472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.796 [2024-11-20 06:43:12.546520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.796 [2024-11-20 06:43:12.546534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.796 [2024-11-20 06:43:12.546540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.796 [2024-11-20 06:43:12.546545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.796 [2024-11-20 06:43:12.546560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.796 qpair failed and we were unable to recover it.
00:32:40.796 [2024-11-20 06:43:12.556418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.796 [2024-11-20 06:43:12.556467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.796 [2024-11-20 06:43:12.556480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.796 [2024-11-20 06:43:12.556486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.796 [2024-11-20 06:43:12.556492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.796 [2024-11-20 06:43:12.556506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.796 qpair failed and we were unable to recover it.
00:32:40.796 [2024-11-20 06:43:12.566539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.796 [2024-11-20 06:43:12.566615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.796 [2024-11-20 06:43:12.566629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.796 [2024-11-20 06:43:12.566636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.796 [2024-11-20 06:43:12.566641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.796 [2024-11-20 06:43:12.566656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.796 qpair failed and we were unable to recover it.
00:32:40.796 [2024-11-20 06:43:12.576615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.796 [2024-11-20 06:43:12.576673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.796 [2024-11-20 06:43:12.576686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.796 [2024-11-20 06:43:12.576692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.796 [2024-11-20 06:43:12.576698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.796 [2024-11-20 06:43:12.576712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.796 qpair failed and we were unable to recover it.
00:32:40.796 [2024-11-20 06:43:12.586581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.796 [2024-11-20 06:43:12.586635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.796 [2024-11-20 06:43:12.586648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.796 [2024-11-20 06:43:12.586654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.796 [2024-11-20 06:43:12.586660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.796 [2024-11-20 06:43:12.586673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.796 qpair failed and we were unable to recover it.
00:32:40.796 [2024-11-20 06:43:12.596606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.796 [2024-11-20 06:43:12.596658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.796 [2024-11-20 06:43:12.596671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.796 [2024-11-20 06:43:12.596677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.796 [2024-11-20 06:43:12.596683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.796 [2024-11-20 06:43:12.596696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.796 qpair failed and we were unable to recover it.
00:32:40.796 [2024-11-20 06:43:12.606647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.796 [2024-11-20 06:43:12.606698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.796 [2024-11-20 06:43:12.606711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.796 [2024-11-20 06:43:12.606721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.796 [2024-11-20 06:43:12.606726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.796 [2024-11-20 06:43:12.606741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.796 qpair failed and we were unable to recover it.
00:32:40.796 [2024-11-20 06:43:12.616600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.796 [2024-11-20 06:43:12.616655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.796 [2024-11-20 06:43:12.616669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.796 [2024-11-20 06:43:12.616675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.796 [2024-11-20 06:43:12.616681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:40.796 [2024-11-20 06:43:12.616694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:40.796 qpair failed and we were unable to recover it.
00:32:41.056 [2024-11-20 06:43:12.626723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.056 [2024-11-20 06:43:12.626783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.056 [2024-11-20 06:43:12.626796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.056 [2024-11-20 06:43:12.626802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.056 [2024-11-20 06:43:12.626808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.056 [2024-11-20 06:43:12.626822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.056 qpair failed and we were unable to recover it.
00:32:41.056 [2024-11-20 06:43:12.636736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.056 [2024-11-20 06:43:12.636796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.056 [2024-11-20 06:43:12.636809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.056 [2024-11-20 06:43:12.636817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.056 [2024-11-20 06:43:12.636822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.056 [2024-11-20 06:43:12.636836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.056 qpair failed and we were unable to recover it.
00:32:41.056 [2024-11-20 06:43:12.646771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.056 [2024-11-20 06:43:12.646831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.056 [2024-11-20 06:43:12.646844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.056 [2024-11-20 06:43:12.646851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.056 [2024-11-20 06:43:12.646856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.056 [2024-11-20 06:43:12.646873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.056 qpair failed and we were unable to recover it.
00:32:41.056 [2024-11-20 06:43:12.656787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.056 [2024-11-20 06:43:12.656843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.056 [2024-11-20 06:43:12.656856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.056 [2024-11-20 06:43:12.656862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.056 [2024-11-20 06:43:12.656868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.056 [2024-11-20 06:43:12.656882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.056 qpair failed and we were unable to recover it.
00:32:41.056 [2024-11-20 06:43:12.666811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.056 [2024-11-20 06:43:12.666865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.056 [2024-11-20 06:43:12.666879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.666886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.666891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.666906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.676851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.676905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.676918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.676925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.676931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.676945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.686877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.686933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.686946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.686952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.686957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.686971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.696953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.697055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.697068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.697074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.697080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.697094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.706936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.707018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.707031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.707038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.707043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.707058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.716958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.717037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.717050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.717057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.717062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.717076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.727026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.727079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.727093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.727099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.727105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.727119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.737025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.737077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.737093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.737099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.737105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.737119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.747054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.747109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.747123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.747130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.747136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.747150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.757085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.757137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.757151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.757158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.757164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.757178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.767116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.057 [2024-11-20 06:43:12.767171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.057 [2024-11-20 06:43:12.767184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.057 [2024-11-20 06:43:12.767191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.057 [2024-11-20 06:43:12.767197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:41.057 [2024-11-20 06:43:12.767217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:41.057 qpair failed and we were unable to recover it.
00:32:41.057 [2024-11-20 06:43:12.777150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.057 [2024-11-20 06:43:12.777211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.057 [2024-11-20 06:43:12.777224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.057 [2024-11-20 06:43:12.777231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.057 [2024-11-20 06:43:12.777240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.057 [2024-11-20 06:43:12.777254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.057 qpair failed and we were unable to recover it. 00:32:41.057 [2024-11-20 06:43:12.787218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.057 [2024-11-20 06:43:12.787272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.057 [2024-11-20 06:43:12.787285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.057 [2024-11-20 06:43:12.787292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.057 [2024-11-20 06:43:12.787297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.057 [2024-11-20 06:43:12.787312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.057 qpair failed and we were unable to recover it. 00:32:41.057 [2024-11-20 06:43:12.797189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.057 [2024-11-20 06:43:12.797243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.057 [2024-11-20 06:43:12.797256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.797263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.797268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.797283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 
00:32:41.058 [2024-11-20 06:43:12.807176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.058 [2024-11-20 06:43:12.807270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.058 [2024-11-20 06:43:12.807283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.807289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.807295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.807309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 00:32:41.058 [2024-11-20 06:43:12.817231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.058 [2024-11-20 06:43:12.817286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.058 [2024-11-20 06:43:12.817299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.817305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.817311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.817325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 00:32:41.058 [2024-11-20 06:43:12.827273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.058 [2024-11-20 06:43:12.827322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.058 [2024-11-20 06:43:12.827335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.827342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.827347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.827362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 
00:32:41.058 [2024-11-20 06:43:12.837292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.058 [2024-11-20 06:43:12.837345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.058 [2024-11-20 06:43:12.837358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.837364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.837370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.837384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 00:32:41.058 [2024-11-20 06:43:12.847321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.058 [2024-11-20 06:43:12.847376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.058 [2024-11-20 06:43:12.847389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.847395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.847401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.847415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 00:32:41.058 [2024-11-20 06:43:12.857367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.058 [2024-11-20 06:43:12.857421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.058 [2024-11-20 06:43:12.857434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.857440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.857446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.857460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 
00:32:41.058 [2024-11-20 06:43:12.867390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.058 [2024-11-20 06:43:12.867441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.058 [2024-11-20 06:43:12.867457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.867464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.867469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.867484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 00:32:41.058 [2024-11-20 06:43:12.877425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.058 [2024-11-20 06:43:12.877482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.058 [2024-11-20 06:43:12.877495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.058 [2024-11-20 06:43:12.877502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.058 [2024-11-20 06:43:12.877507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.058 [2024-11-20 06:43:12.877522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.058 qpair failed and we were unable to recover it. 00:32:41.318 [2024-11-20 06:43:12.887518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.318 [2024-11-20 06:43:12.887597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.318 [2024-11-20 06:43:12.887610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.318 [2024-11-20 06:43:12.887616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.318 [2024-11-20 06:43:12.887622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.318 [2024-11-20 06:43:12.887636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.318 qpair failed and we were unable to recover it. 
00:32:41.318 [2024-11-20 06:43:12.897496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.318 [2024-11-20 06:43:12.897555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.318 [2024-11-20 06:43:12.897567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.318 [2024-11-20 06:43:12.897574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.318 [2024-11-20 06:43:12.897580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.318 [2024-11-20 06:43:12.897593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.318 qpair failed and we were unable to recover it. 00:32:41.318 [2024-11-20 06:43:12.907512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.318 [2024-11-20 06:43:12.907564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.318 [2024-11-20 06:43:12.907577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.318 [2024-11-20 06:43:12.907583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.318 [2024-11-20 06:43:12.907592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.318 [2024-11-20 06:43:12.907606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.318 qpair failed and we were unable to recover it. 00:32:41.318 [2024-11-20 06:43:12.917530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.318 [2024-11-20 06:43:12.917584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.318 [2024-11-20 06:43:12.917597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.318 [2024-11-20 06:43:12.917603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.318 [2024-11-20 06:43:12.917609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.318 [2024-11-20 06:43:12.917623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.318 qpair failed and we were unable to recover it. 
00:32:41.319 [2024-11-20 06:43:12.927548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:12.927610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:12.927622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:12.927628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:12.927634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:12.927648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 00:32:41.319 [2024-11-20 06:43:12.937591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:12.937643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:12.937656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:12.937663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:12.937669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:12.937682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 00:32:41.319 [2024-11-20 06:43:12.947606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:12.947658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:12.947671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:12.947677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:12.947683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:12.947697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 
00:32:41.319 [2024-11-20 06:43:12.957679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:12.957759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:12.957773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:12.957779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:12.957785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:12.957799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 00:32:41.319 [2024-11-20 06:43:12.967693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:12.967760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:12.967773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:12.967780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:12.967786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:12.967800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 00:32:41.319 [2024-11-20 06:43:12.977748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:12.977807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:12.977820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:12.977827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:12.977833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:12.977847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 
00:32:41.319 [2024-11-20 06:43:12.987740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:12.987799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:12.987812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:12.987818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:12.987824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:12.987838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 00:32:41.319 [2024-11-20 06:43:12.997751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:12.997812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:12.997828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:12.997834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:12.997839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:12.997853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 00:32:41.319 [2024-11-20 06:43:13.007788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:13.007844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:13.007857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:13.007863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:13.007869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:13.007883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 
00:32:41.319 [2024-11-20 06:43:13.017812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:13.017866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:13.017879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:13.017885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:13.017891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:13.017905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 00:32:41.319 [2024-11-20 06:43:13.027757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:13.027813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:13.027826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:13.027832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:13.027838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:13.027852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 00:32:41.319 [2024-11-20 06:43:13.037896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:13.037949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:13.037963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:13.037972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:13.037978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:13.037992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.319 qpair failed and we were unable to recover it. 
00:32:41.319 [2024-11-20 06:43:13.047900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.319 [2024-11-20 06:43:13.047957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.319 [2024-11-20 06:43:13.047971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.319 [2024-11-20 06:43:13.047977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.319 [2024-11-20 06:43:13.047983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.319 [2024-11-20 06:43:13.047998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 00:32:41.320 [2024-11-20 06:43:13.057898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.057953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.057966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.057973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.057978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.057993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 00:32:41.320 [2024-11-20 06:43:13.067948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.068001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.068013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.068020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.068025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.068039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 
00:32:41.320 [2024-11-20 06:43:13.078010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.078065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.078081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.078088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.078094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.078110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 00:32:41.320 [2024-11-20 06:43:13.088017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.088071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.088084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.088090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.088096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.088111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 00:32:41.320 [2024-11-20 06:43:13.098040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.098095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.098108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.098115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.098121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.098135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 
00:32:41.320 [2024-11-20 06:43:13.108057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.108107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.108120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.108126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.108132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.108147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 00:32:41.320 [2024-11-20 06:43:13.118091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.118145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.118160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.118167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.118173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.118187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 00:32:41.320 [2024-11-20 06:43:13.128138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.128196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.128213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.128219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.128225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.128239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 
00:32:41.320 [2024-11-20 06:43:13.138153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.138209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.138223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.138229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.138235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.138249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 00:32:41.320 [2024-11-20 06:43:13.148207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.320 [2024-11-20 06:43:13.148262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.320 [2024-11-20 06:43:13.148276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.320 [2024-11-20 06:43:13.148282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.320 [2024-11-20 06:43:13.148288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.320 [2024-11-20 06:43:13.148302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.320 qpair failed and we were unable to recover it. 00:32:41.579 [2024-11-20 06:43:13.158238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.579 [2024-11-20 06:43:13.158301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.579 [2024-11-20 06:43:13.158314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.579 [2024-11-20 06:43:13.158321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.579 [2024-11-20 06:43:13.158326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.579 [2024-11-20 06:43:13.158341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.579 qpair failed and we were unable to recover it. 
00:32:41.579 [2024-11-20 06:43:13.168254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.579 [2024-11-20 06:43:13.168314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.579 [2024-11-20 06:43:13.168327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.579 [2024-11-20 06:43:13.168337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.579 [2024-11-20 06:43:13.168342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.579 [2024-11-20 06:43:13.168357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.579 qpair failed and we were unable to recover it. 00:32:41.579 [2024-11-20 06:43:13.178277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.579 [2024-11-20 06:43:13.178331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.579 [2024-11-20 06:43:13.178345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.579 [2024-11-20 06:43:13.178351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.579 [2024-11-20 06:43:13.178357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.579 [2024-11-20 06:43:13.178371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.579 qpair failed and we were unable to recover it. 00:32:41.579 [2024-11-20 06:43:13.188309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.579 [2024-11-20 06:43:13.188365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.579 [2024-11-20 06:43:13.188378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.579 [2024-11-20 06:43:13.188384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.579 [2024-11-20 06:43:13.188390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.579 [2024-11-20 06:43:13.188404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.579 qpair failed and we were unable to recover it. 
00:32:41.579 [2024-11-20 06:43:13.198332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.579 [2024-11-20 06:43:13.198387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.579 [2024-11-20 06:43:13.198400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.579 [2024-11-20 06:43:13.198407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.579 [2024-11-20 06:43:13.198413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.579 [2024-11-20 06:43:13.198427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.579 qpair failed and we were unable to recover it. 00:32:41.579 [2024-11-20 06:43:13.208352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.579 [2024-11-20 06:43:13.208407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.579 [2024-11-20 06:43:13.208420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.579 [2024-11-20 06:43:13.208427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.579 [2024-11-20 06:43:13.208433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.579 [2024-11-20 06:43:13.208450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.579 qpair failed and we were unable to recover it. 00:32:41.579 [2024-11-20 06:43:13.218387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.579 [2024-11-20 06:43:13.218440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.579 [2024-11-20 06:43:13.218453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.579 [2024-11-20 06:43:13.218459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.218465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.218479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 
00:32:41.580 [2024-11-20 06:43:13.228405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.228454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.228467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.228473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.228478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.228492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.238429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.238483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.238498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.238504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.238510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.238523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.248477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.248531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.248545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.248551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.248558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.248573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 
00:32:41.580 [2024-11-20 06:43:13.258497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.258584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.258597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.258604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.258610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.258624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.268516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.268566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.268579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.268586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.268592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.268607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.278461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.278523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.278535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.278542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.278547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.278562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 
00:32:41.580 [2024-11-20 06:43:13.288592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.288650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.288663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.288670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.288676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.288691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.298585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.298640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.298656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.298662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.298668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.298682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.308576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.308625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.308638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.308644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.308650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.308664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 
00:32:41.580 [2024-11-20 06:43:13.318654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.318707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.318720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.318727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.318732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.318746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.328698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.328752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.328765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.328771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.328777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.328792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.338731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.338784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.338797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.338804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.338813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.338828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 
00:32:41.580 [2024-11-20 06:43:13.348769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.348833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.348846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.348852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.348857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.348871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.358767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.358824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.358837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.358843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.358849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.580 [2024-11-20 06:43:13.358862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.580 qpair failed and we were unable to recover it. 00:32:41.580 [2024-11-20 06:43:13.368826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.580 [2024-11-20 06:43:13.368881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.580 [2024-11-20 06:43:13.368894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.580 [2024-11-20 06:43:13.368900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.580 [2024-11-20 06:43:13.368906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.581 [2024-11-20 06:43:13.368920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.581 qpair failed and we were unable to recover it. 
00:32:41.581 [2024-11-20 06:43:13.378840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.581 [2024-11-20 06:43:13.378892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.581 [2024-11-20 06:43:13.378905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.581 [2024-11-20 06:43:13.378912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.581 [2024-11-20 06:43:13.378917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.581 [2024-11-20 06:43:13.378931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.581 qpair failed and we were unable to recover it. 00:32:41.581 [2024-11-20 06:43:13.388847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.581 [2024-11-20 06:43:13.388902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.581 [2024-11-20 06:43:13.388916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.581 [2024-11-20 06:43:13.388922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.581 [2024-11-20 06:43:13.388928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.581 [2024-11-20 06:43:13.388942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.581 qpair failed and we were unable to recover it. 00:32:41.581 [2024-11-20 06:43:13.398923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.581 [2024-11-20 06:43:13.398973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.581 [2024-11-20 06:43:13.398986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.581 [2024-11-20 06:43:13.398993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.581 [2024-11-20 06:43:13.398998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.581 [2024-11-20 06:43:13.399012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.581 qpair failed and we were unable to recover it. 
00:32:41.581 [2024-11-20 06:43:13.408976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.581 [2024-11-20 06:43:13.409034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.581 [2024-11-20 06:43:13.409048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.581 [2024-11-20 06:43:13.409054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.581 [2024-11-20 06:43:13.409060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.581 [2024-11-20 06:43:13.409074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.581 qpair failed and we were unable to recover it. 00:32:41.841 [2024-11-20 06:43:13.418955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.419012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.419024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.419031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.419036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.419050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 00:32:41.841 [2024-11-20 06:43:13.428903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.428978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.428994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.429001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.429006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.429020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 
00:32:41.841 [2024-11-20 06:43:13.439013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.439087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.439101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.439107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.439113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.439126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 00:32:41.841 [2024-11-20 06:43:13.449042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.449096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.449111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.449119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.449127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.449142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 00:32:41.841 [2024-11-20 06:43:13.459089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.459173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.459187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.459193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.459199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.459218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 
00:32:41.841 [2024-11-20 06:43:13.469079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.469133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.469146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.469153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.469162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.469176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 00:32:41.841 [2024-11-20 06:43:13.479111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.479167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.479180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.479186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.479192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.479211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 00:32:41.841 [2024-11-20 06:43:13.489156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.489219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.489233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.489239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.489245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.489259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 
00:32:41.841 [2024-11-20 06:43:13.499173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.499233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.499247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.499253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.499259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.499273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 00:32:41.841 [2024-11-20 06:43:13.509184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.509241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.509254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.509260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.509266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.509280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.841 qpair failed and we were unable to recover it. 00:32:41.841 [2024-11-20 06:43:13.519224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.841 [2024-11-20 06:43:13.519278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.841 [2024-11-20 06:43:13.519291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.841 [2024-11-20 06:43:13.519297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.841 [2024-11-20 06:43:13.519303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.841 [2024-11-20 06:43:13.519317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 
00:32:41.842 [2024-11-20 06:43:13.529280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.529335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.529348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.529355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.529361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.529374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 00:32:41.842 [2024-11-20 06:43:13.539308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.539367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.539380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.539387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.539392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.539407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 00:32:41.842 [2024-11-20 06:43:13.549312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.549364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.549377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.549383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.549389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.549404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 
00:32:41.842 [2024-11-20 06:43:13.559264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.559315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.559331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.559337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.559343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.559357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 00:32:41.842 [2024-11-20 06:43:13.569373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.569425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.569438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.569444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.569450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.569463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 00:32:41.842 [2024-11-20 06:43:13.579444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.579544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.579557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.579564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.579570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.579584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 
00:32:41.842 [2024-11-20 06:43:13.589363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.589414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.589427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.589434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.589439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.589453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 00:32:41.842 [2024-11-20 06:43:13.599475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.599526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.599539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.599549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.599555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.599569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 00:32:41.842 [2024-11-20 06:43:13.609518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.609596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.609609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.609615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.609621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.609635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 
00:32:41.842 [2024-11-20 06:43:13.619551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.619608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.619621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.619627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.619633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.619647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 00:32:41.842 [2024-11-20 06:43:13.629579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.629654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.629667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.629674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.629679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.629693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.842 qpair failed and we were unable to recover it. 00:32:41.842 [2024-11-20 06:43:13.639536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.842 [2024-11-20 06:43:13.639637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.842 [2024-11-20 06:43:13.639651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.842 [2024-11-20 06:43:13.639658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.842 [2024-11-20 06:43:13.639663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.842 [2024-11-20 06:43:13.639681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.843 qpair failed and we were unable to recover it. 
00:32:41.843 [2024-11-20 06:43:13.649621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.843 [2024-11-20 06:43:13.649696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.843 [2024-11-20 06:43:13.649709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.843 [2024-11-20 06:43:13.649716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.843 [2024-11-20 06:43:13.649721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.843 [2024-11-20 06:43:13.649735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.843 qpair failed and we were unable to recover it. 00:32:41.843 [2024-11-20 06:43:13.659666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.843 [2024-11-20 06:43:13.659730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.843 [2024-11-20 06:43:13.659743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.843 [2024-11-20 06:43:13.659749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.843 [2024-11-20 06:43:13.659755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.843 [2024-11-20 06:43:13.659769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.843 qpair failed and we were unable to recover it. 00:32:41.843 [2024-11-20 06:43:13.669618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.843 [2024-11-20 06:43:13.669695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.843 [2024-11-20 06:43:13.669709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.843 [2024-11-20 06:43:13.669715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.843 [2024-11-20 06:43:13.669721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:41.843 [2024-11-20 06:43:13.669735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:41.843 qpair failed and we were unable to recover it. 
00:32:42.102 [2024-11-20 06:43:13.679697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.102 [2024-11-20 06:43:13.679759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.102 [2024-11-20 06:43:13.679772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.102 [2024-11-20 06:43:13.679778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.679784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.679798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.689739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.689818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.689831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.689837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.689843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.689857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.699753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.699807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.699820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.699826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.699832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.699846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 
00:32:42.103 [2024-11-20 06:43:13.709721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.709777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.709791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.709798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.709804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.709818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.719777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.719844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.719857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.719864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.719869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.719884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.729863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.729921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.729934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.729944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.729950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.729963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 
00:32:42.103 [2024-11-20 06:43:13.739795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.739889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.739903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.739909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.739915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.739929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.749841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.749910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.749923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.749930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.749936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.749951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.759841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.759928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.759941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.759947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.759952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.759966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 
00:32:42.103 [2024-11-20 06:43:13.769931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.769986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.769999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.770005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.770011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.770029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.780020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.780080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.780093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.780100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.780106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.780120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.789990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.790062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.790075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.790081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.790087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.790102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 
00:32:42.103 [2024-11-20 06:43:13.800045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.800101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.800114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.103 [2024-11-20 06:43:13.800121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.103 [2024-11-20 06:43:13.800127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.103 [2024-11-20 06:43:13.800141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.103 qpair failed and we were unable to recover it. 00:32:42.103 [2024-11-20 06:43:13.810080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.103 [2024-11-20 06:43:13.810137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.103 [2024-11-20 06:43:13.810151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.104 [2024-11-20 06:43:13.810158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.104 [2024-11-20 06:43:13.810164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.104 [2024-11-20 06:43:13.810178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.104 qpair failed and we were unable to recover it. 00:32:42.104 [2024-11-20 06:43:13.820081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.104 [2024-11-20 06:43:13.820138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.104 [2024-11-20 06:43:13.820152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.104 [2024-11-20 06:43:13.820159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.104 [2024-11-20 06:43:13.820164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.104 [2024-11-20 06:43:13.820179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.104 qpair failed and we were unable to recover it. 
00:32:42.104 [2024-11-20 06:43:13.830124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.104 [2024-11-20 06:43:13.830180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.104 [2024-11-20 06:43:13.830193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.104 [2024-11-20 06:43:13.830200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.104 [2024-11-20 06:43:13.830210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.104 [2024-11-20 06:43:13.830224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.104 qpair failed and we were unable to recover it. 00:32:42.104 [2024-11-20 06:43:13.840198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.104 [2024-11-20 06:43:13.840306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.104 [2024-11-20 06:43:13.840320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.104 [2024-11-20 06:43:13.840327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.104 [2024-11-20 06:43:13.840332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.104 [2024-11-20 06:43:13.840347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.104 qpair failed and we were unable to recover it. 00:32:42.104 [2024-11-20 06:43:13.850220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.104 [2024-11-20 06:43:13.850294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.104 [2024-11-20 06:43:13.850308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.104 [2024-11-20 06:43:13.850314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.104 [2024-11-20 06:43:13.850320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.104 [2024-11-20 06:43:13.850334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.104 qpair failed and we were unable to recover it. 
00:32:42.104 [2024-11-20 06:43:13.860273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.104 [2024-11-20 06:43:13.860378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.104 [2024-11-20 06:43:13.860394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.104 [2024-11-20 06:43:13.860401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.104 [2024-11-20 06:43:13.860406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.104 [2024-11-20 06:43:13.860421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.104 qpair failed and we were unable to recover it. 00:32:42.104 [2024-11-20 06:43:13.870270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.104 [2024-11-20 06:43:13.870327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.104 [2024-11-20 06:43:13.870340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.104 [2024-11-20 06:43:13.870346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.104 [2024-11-20 06:43:13.870352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.104 [2024-11-20 06:43:13.870367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.104 qpair failed and we were unable to recover it. 00:32:42.104 [2024-11-20 06:43:13.880295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.104 [2024-11-20 06:43:13.880352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.104 [2024-11-20 06:43:13.880365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.104 [2024-11-20 06:43:13.880372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.104 [2024-11-20 06:43:13.880378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90 00:32:42.104 [2024-11-20 06:43:13.880391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.104 qpair failed and we were unable to recover it. 
00:32:42.104 [2024-11-20 06:43:13.890308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.104 [2024-11-20 06:43:13.890367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.104 [2024-11-20 06:43:13.890380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.104 [2024-11-20 06:43:13.890387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.104 [2024-11-20 06:43:13.890393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.104 [2024-11-20 06:43:13.890406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.104 qpair failed and we were unable to recover it.
00:32:42.104 [2024-11-20 06:43:13.900422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.104 [2024-11-20 06:43:13.900488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.104 [2024-11-20 06:43:13.900501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.104 [2024-11-20 06:43:13.900508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.104 [2024-11-20 06:43:13.900517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.104 [2024-11-20 06:43:13.900531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.104 qpair failed and we were unable to recover it.
00:32:42.104 [2024-11-20 06:43:13.910371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.104 [2024-11-20 06:43:13.910427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.104 [2024-11-20 06:43:13.910441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.104 [2024-11-20 06:43:13.910448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.104 [2024-11-20 06:43:13.910454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.104 [2024-11-20 06:43:13.910468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.104 qpair failed and we were unable to recover it.
00:32:42.104 [2024-11-20 06:43:13.920411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.104 [2024-11-20 06:43:13.920462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.104 [2024-11-20 06:43:13.920475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.104 [2024-11-20 06:43:13.920481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.104 [2024-11-20 06:43:13.920487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.104 [2024-11-20 06:43:13.920501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.104 qpair failed and we were unable to recover it.
00:32:42.104 [2024-11-20 06:43:13.930484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.104 [2024-11-20 06:43:13.930539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.104 [2024-11-20 06:43:13.930552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.104 [2024-11-20 06:43:13.930558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.104 [2024-11-20 06:43:13.930564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.104 [2024-11-20 06:43:13.930578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.104 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:13.940445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:13.940503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:13.940517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.364 [2024-11-20 06:43:13.940523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.364 [2024-11-20 06:43:13.940529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.364 [2024-11-20 06:43:13.940542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.364 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:13.950536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:13.950589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:13.950603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.364 [2024-11-20 06:43:13.950609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.364 [2024-11-20 06:43:13.950615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.364 [2024-11-20 06:43:13.950629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.364 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:13.960496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:13.960553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:13.960566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.364 [2024-11-20 06:43:13.960573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.364 [2024-11-20 06:43:13.960579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.364 [2024-11-20 06:43:13.960593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.364 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:13.970524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:13.970590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:13.970603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.364 [2024-11-20 06:43:13.970610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.364 [2024-11-20 06:43:13.970616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.364 [2024-11-20 06:43:13.970629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.364 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:13.980595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:13.980654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:13.980667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.364 [2024-11-20 06:43:13.980674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.364 [2024-11-20 06:43:13.980679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.364 [2024-11-20 06:43:13.980693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.364 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:13.990551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:13.990606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:13.990622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.364 [2024-11-20 06:43:13.990629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.364 [2024-11-20 06:43:13.990634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.364 [2024-11-20 06:43:13.990648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.364 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:14.000608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:14.000703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:14.000715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.364 [2024-11-20 06:43:14.000722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.364 [2024-11-20 06:43:14.000727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.364 [2024-11-20 06:43:14.000741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.364 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:14.010634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:14.010687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:14.010700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.364 [2024-11-20 06:43:14.010707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.364 [2024-11-20 06:43:14.010713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.364 [2024-11-20 06:43:14.010727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.364 qpair failed and we were unable to recover it.
00:32:42.364 [2024-11-20 06:43:14.020682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.364 [2024-11-20 06:43:14.020737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.364 [2024-11-20 06:43:14.020751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.020757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.020763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.020777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.030678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.030728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.030741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.030747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.030756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.030770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.040715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.040764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.040778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.040784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.040790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.040804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.050752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.050807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.050820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.050827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.050832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.050846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.060766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.060822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.060835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.060842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.060848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.060862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.070829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.070885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.070898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.070905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.070910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.070924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.080817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.080869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.080882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.080888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.080894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.080908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.090931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.091034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.091048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.091054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.091060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.091074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.100884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.100971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.100984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.100990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.100996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.101010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.110882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.110972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.110985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.110991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.110997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.111011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.120941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.120994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.121012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.121018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.121024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.121038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.130953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.131023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.131036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.131042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.131048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.131062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.140995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.365 [2024-11-20 06:43:14.141053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.365 [2024-11-20 06:43:14.141067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.365 [2024-11-20 06:43:14.141073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.365 [2024-11-20 06:43:14.141079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.365 [2024-11-20 06:43:14.141094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.365 qpair failed and we were unable to recover it.
00:32:42.365 [2024-11-20 06:43:14.151015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.366 [2024-11-20 06:43:14.151067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.366 [2024-11-20 06:43:14.151081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.366 [2024-11-20 06:43:14.151087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.366 [2024-11-20 06:43:14.151093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.366 [2024-11-20 06:43:14.151107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.366 qpair failed and we were unable to recover it.
00:32:42.366 [2024-11-20 06:43:14.161035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.366 [2024-11-20 06:43:14.161092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.366 [2024-11-20 06:43:14.161105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.366 [2024-11-20 06:43:14.161115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.366 [2024-11-20 06:43:14.161120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.366 [2024-11-20 06:43:14.161135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.366 qpair failed and we were unable to recover it.
00:32:42.366 [2024-11-20 06:43:14.171075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.366 [2024-11-20 06:43:14.171133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.366 [2024-11-20 06:43:14.171146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.366 [2024-11-20 06:43:14.171153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.366 [2024-11-20 06:43:14.171159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.366 [2024-11-20 06:43:14.171173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.366 qpair failed and we were unable to recover it.
00:32:42.366 [2024-11-20 06:43:14.181124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.366 [2024-11-20 06:43:14.181183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.366 [2024-11-20 06:43:14.181198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.366 [2024-11-20 06:43:14.181209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.366 [2024-11-20 06:43:14.181215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.366 [2024-11-20 06:43:14.181230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.366 qpair failed and we were unable to recover it.
00:32:42.366 [2024-11-20 06:43:14.191124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.366 [2024-11-20 06:43:14.191177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.366 [2024-11-20 06:43:14.191192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.366 [2024-11-20 06:43:14.191199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.366 [2024-11-20 06:43:14.191210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.366 [2024-11-20 06:43:14.191225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.366 qpair failed and we were unable to recover it.
00:32:42.625 [2024-11-20 06:43:14.201167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.625 [2024-11-20 06:43:14.201230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.625 [2024-11-20 06:43:14.201243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.625 [2024-11-20 06:43:14.201249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.625 [2024-11-20 06:43:14.201255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.625 [2024-11-20 06:43:14.201273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.625 qpair failed and we were unable to recover it.
00:32:42.625 [2024-11-20 06:43:14.211191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.625 [2024-11-20 06:43:14.211267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.625 [2024-11-20 06:43:14.211280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.625 [2024-11-20 06:43:14.211287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.625 [2024-11-20 06:43:14.211292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.625 [2024-11-20 06:43:14.211307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.625 qpair failed and we were unable to recover it.
00:32:42.625 [2024-11-20 06:43:14.221216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.625 [2024-11-20 06:43:14.221275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.625 [2024-11-20 06:43:14.221289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.625 [2024-11-20 06:43:14.221296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.625 [2024-11-20 06:43:14.221302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.221316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.231242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.231293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.231307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.231313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.231319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.231332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.241226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.241303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.241325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.241332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.241337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.241357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.251305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.251365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.251379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.251385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.251391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.251405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.261334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.261386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.261400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.261406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.261412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.261426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.271340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.271421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.271435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.271441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.271446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.271460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.281373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.281423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.281436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.281442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.281448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.281462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.291414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.291472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.291485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.291494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.291500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.291514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.301442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.301495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.301508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.301516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.301522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.301536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.311457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.311510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.311523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.311529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.311535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.311549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.321481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.321530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.321543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.321550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.321556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.321571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.331456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.331511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.331525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.331531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.331538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.331555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.341601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.341659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.341673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.341680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.341686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.626 [2024-11-20 06:43:14.341701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.626 qpair failed and we were unable to recover it.
00:32:42.626 [2024-11-20 06:43:14.351565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.626 [2024-11-20 06:43:14.351624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.626 [2024-11-20 06:43:14.351637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.626 [2024-11-20 06:43:14.351643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.626 [2024-11-20 06:43:14.351649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.627 [2024-11-20 06:43:14.351663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.361608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.361660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.361673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.361680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.361686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.627 [2024-11-20 06:43:14.361700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.371608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.371679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.371692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.371698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.371704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.627 [2024-11-20 06:43:14.371718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.381667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.381749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.381762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.381768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.381773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.627 [2024-11-20 06:43:14.381787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.391676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.391726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.391739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.391746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.391752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.627 [2024-11-20 06:43:14.391766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.401693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.401748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.401761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.401768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.401773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.627 [2024-11-20 06:43:14.401788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.411723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.411775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.411788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.411794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.411800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.627 [2024-11-20 06:43:14.411814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.421763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.421829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.421845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.421851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.421857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd15c000b90
00:32:42.627 [2024-11-20 06:43:14.421870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.431823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.431924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.431980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.432005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.432027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:32:42.627 [2024-11-20 06:43:14.432080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.441831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:42.627 [2024-11-20 06:43:14.441902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:42.627 [2024-11-20 06:43:14.441930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:42.627 [2024-11-20 06:43:14.441944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:42.627 [2024-11-20 06:43:14.441957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:32:42.627 [2024-11-20 06:43:14.441987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:42.627 qpair failed and we were unable to recover it.
00:32:42.627 [2024-11-20 06:43:14.442106] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:32:42.627 A controller has encountered a failure and is being reset.
00:32:42.627 Controller properly reset.
00:32:42.886 Initializing NVMe Controllers
00:32:42.886 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:42.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:42.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:32:42.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:32:42.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:32:42.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:32:42.886 Initialization complete. Launching workers.
00:32:42.886 Starting thread on core 1 00:32:42.886 Starting thread on core 2 00:32:42.886 Starting thread on core 3 00:32:42.886 Starting thread on core 0 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:32:42.886 00:32:42.886 real 0m10.803s 00:32:42.886 user 0m19.101s 00:32:42.886 sys 0m4.651s 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:42.886 ************************************ 00:32:42.886 END TEST nvmf_target_disconnect_tc2 00:32:42.886 ************************************ 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.886 rmmod nvme_tcp 00:32:42.886 rmmod nvme_fabrics 00:32:42.886 rmmod nvme_keyring 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 710935 ']' 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 710935 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 710935 ']' 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 710935 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 710935 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 710935' 00:32:42.886 killing process with pid 710935 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 710935 00:32:42.886 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 710935 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.145 06:43:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.685 06:43:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.685 00:32:45.685 real 0m19.602s 00:32:45.685 user 0m46.886s 00:32:45.685 sys 0m9.563s 00:32:45.685 06:43:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:45.685 06:43:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:45.685 ************************************ 00:32:45.685 END TEST nvmf_target_disconnect 00:32:45.685 ************************************ 00:32:45.685 06:43:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:45.685 00:32:45.685 real 5m54.002s 00:32:45.685 user 10m38.083s 00:32:45.685 sys 1m58.454s 00:32:45.685 06:43:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:45.685 06:43:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.685 ************************************ 00:32:45.685 END TEST nvmf_host 00:32:45.685 ************************************ 00:32:45.685 06:43:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:32:45.685 06:43:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:32:45.685 06:43:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:45.685 06:43:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:45.685 06:43:16 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:45.685 06:43:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.685 ************************************ 00:32:45.685 START TEST nvmf_target_core_interrupt_mode 00:32:45.685 ************************************ 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:45.685 * Looking for test storage... 00:32:45.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.685 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.686 --rc genhtml_branch_coverage=1 00:32:45.686 --rc genhtml_function_coverage=1 00:32:45.686 --rc genhtml_legend=1 00:32:45.686 --rc geninfo_all_blocks=1 00:32:45.686 --rc geninfo_unexecuted_blocks=1 00:32:45.686 00:32:45.686 ' 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.686 --rc genhtml_branch_coverage=1 00:32:45.686 --rc genhtml_function_coverage=1 00:32:45.686 --rc genhtml_legend=1 00:32:45.686 --rc geninfo_all_blocks=1 00:32:45.686 --rc geninfo_unexecuted_blocks=1 00:32:45.686 00:32:45.686 ' 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.686 --rc genhtml_branch_coverage=1 00:32:45.686 --rc genhtml_function_coverage=1 00:32:45.686 --rc genhtml_legend=1 00:32:45.686 --rc geninfo_all_blocks=1 00:32:45.686 --rc geninfo_unexecuted_blocks=1 00:32:45.686 00:32:45.686 ' 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.686 --rc genhtml_branch_coverage=1 00:32:45.686 --rc genhtml_function_coverage=1 00:32:45.686 --rc genhtml_legend=1 00:32:45.686 --rc geninfo_all_blocks=1 00:32:45.686 --rc geninfo_unexecuted_blocks=1 00:32:45.686 00:32:45.686 ' 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.686 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:45.687 ************************************ 00:32:45.687 START TEST nvmf_abort 00:32:45.687 ************************************ 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:45.687 * Looking for test storage... 00:32:45.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:45.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.687 --rc genhtml_branch_coverage=1 00:32:45.687 --rc genhtml_function_coverage=1 00:32:45.687 --rc genhtml_legend=1 00:32:45.687 --rc geninfo_all_blocks=1 00:32:45.687 --rc geninfo_unexecuted_blocks=1 00:32:45.687 00:32:45.687 ' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:45.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.687 --rc genhtml_branch_coverage=1 00:32:45.687 --rc genhtml_function_coverage=1 00:32:45.687 --rc genhtml_legend=1 00:32:45.687 --rc geninfo_all_blocks=1 00:32:45.687 --rc geninfo_unexecuted_blocks=1 00:32:45.687 00:32:45.687 ' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:45.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.687 --rc genhtml_branch_coverage=1 00:32:45.687 --rc genhtml_function_coverage=1 00:32:45.687 --rc genhtml_legend=1 00:32:45.687 --rc geninfo_all_blocks=1 00:32:45.687 --rc geninfo_unexecuted_blocks=1 00:32:45.687 00:32:45.687 ' 00:32:45.687 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:45.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.687 --rc genhtml_branch_coverage=1 00:32:45.687 --rc genhtml_function_coverage=1 00:32:45.687 --rc genhtml_legend=1 00:32:45.687 --rc geninfo_all_blocks=1 00:32:45.687 --rc geninfo_unexecuted_blocks=1 00:32:45.687 00:32:45.687 ' 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.688 06:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.688 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.257 06:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:52.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:52.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.257 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:52.258 Found net devices under 0000:86:00.0: cvl_0_0 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:52.258 Found net devices under 0000:86:00.1: cvl_0_1 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:32:52.258 00:32:52.258 --- 10.0.0.2 ping statistics --- 00:32:52.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.258 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:52.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:32:52.258 00:32:52.258 --- 10.0.0.1 ping statistics --- 00:32:52.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.258 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=715477 
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 715477
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 715477 ']'
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:52.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.258 [2024-11-20 06:43:23.428821] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:52.258 [2024-11-20 06:43:23.429722] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:32:52.258 [2024-11-20 06:43:23.429756] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:52.258 [2024-11-20 06:43:23.509261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:52.258 [2024-11-20 06:43:23.550346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:52.258 [2024-11-20 06:43:23.550382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:52.258 [2024-11-20 06:43:23.550389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:52.258 [2024-11-20 06:43:23.550395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:52.258 [2024-11-20 06:43:23.550400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:52.258 [2024-11-20 06:43:23.551818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:52.258 [2024-11-20 06:43:23.551922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:52.258 [2024-11-20 06:43:23.551923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:32:52.258 [2024-11-20 06:43:23.617455] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:52.258 [2024-11-20 06:43:23.618217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:32:52.258 [2024-11-20 06:43:23.618472] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:52.258 [2024-11-20 06:43:23.618598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.258 [2024-11-20 06:43:23.688700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:52.258 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.259 Malloc0
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.259 Delay0
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.259 [2024-11-20 06:43:23.780630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:52.259 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:32:52.259 [2024-11-20 06:43:23.955423] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:32:54.156 Initializing NVMe Controllers
00:32:54.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:32:54.156 controller IO queue size 128 less than required
00:32:54.156 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:32:54.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:32:54.157 Initialization complete. Launching workers.
00:32:54.157 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38236 00:32:54.157 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38293, failed to submit 66 00:32:54.157 success 38236, unsuccessful 57, failed 0 00:32:54.157 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:54.157 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.157 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.416 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.416 rmmod nvme_tcp 00:32:54.416 rmmod nvme_fabrics 00:32:54.416 rmmod nvme_keyring 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 715477 ']' 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 715477 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 715477 ']' 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 715477 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 715477 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 715477' 00:32:54.416 killing process with pid 715477 00:32:54.416 
06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 715477 00:32:54.416 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 715477 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.675 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.580 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:56.580 00:32:56.580 real 0m11.111s 00:32:56.580 user 0m10.251s 00:32:56.580 sys 0m5.760s 00:32:56.580 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:56.580 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:56.580 ************************************ 00:32:56.580 END TEST nvmf_abort 00:32:56.580 ************************************ 00:32:56.580 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:56.580 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:56.580 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:56.580 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:56.840 ************************************ 00:32:56.840 START TEST nvmf_ns_hotplug_stress 00:32:56.840 ************************************ 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:56.840 * Looking for test storage... 
00:32:56.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.840 --rc genhtml_branch_coverage=1 00:32:56.840 --rc genhtml_function_coverage=1 00:32:56.840 --rc genhtml_legend=1 00:32:56.840 --rc geninfo_all_blocks=1 00:32:56.840 --rc geninfo_unexecuted_blocks=1 00:32:56.840 00:32:56.840 ' 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.840 --rc genhtml_branch_coverage=1 00:32:56.840 --rc genhtml_function_coverage=1 00:32:56.840 --rc genhtml_legend=1 00:32:56.840 --rc geninfo_all_blocks=1 00:32:56.840 --rc geninfo_unexecuted_blocks=1 00:32:56.840 00:32:56.840 ' 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.840 --rc genhtml_branch_coverage=1 00:32:56.840 --rc genhtml_function_coverage=1 00:32:56.840 --rc genhtml_legend=1 00:32:56.840 --rc geninfo_all_blocks=1 00:32:56.840 --rc geninfo_unexecuted_blocks=1 00:32:56.840 00:32:56.840 ' 00:32:56.840 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.840 --rc genhtml_branch_coverage=1 00:32:56.840 --rc genhtml_function_coverage=1 
00:32:56.840 --rc genhtml_legend=1 00:32:56.840 --rc geninfo_all_blocks=1 00:32:56.840 --rc geninfo_unexecuted_blocks=1 00:32:56.840 00:32:56.841 ' 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:56.841 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:03.412 06:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:03.412 06:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:03.412 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:03.412 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:03.412 
06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:03.412 Found net devices under 0000:86:00.0: cvl_0_0 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.412 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:03.413 Found net devices under 0000:86:00.1: cvl_0_1 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:03.413 06:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:03.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:03.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:33:03.413 00:33:03.413 --- 10.0.0.2 ping statistics --- 00:33:03.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.413 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:03.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:03.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:33:03.413 00:33:03.413 --- 10.0.0.1 ping statistics --- 00:33:03.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.413 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=719483 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 719483 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 719483 ']' 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
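
The nvmftestinit phase traced above splits the two e810 ports so a single host can act as both initiator and target: one port moves into the cvl_0_0_ns_spdk namespace and carries the target IP, the other stays in the host namespace as the initiator, and the cross-namespace pings confirm reachability. A minimal sketch of that setup, assuming the port names and addresses used in this run:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                           # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP stays on the host
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  ping -c 1 10.0.0.2                                        # host -> target reachability check
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> host
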
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:33:03.413 [2024-11-20 06:43:34.599403] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:03.413 [2024-11-20 06:43:34.600306] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:33:03.413 [2024-11-20 06:43:34.600340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:03.413 [2024-11-20 06:43:34.675978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:03.413 [2024-11-20 06:43:34.717427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:03.413 [2024-11-20 06:43:34.717464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:03.413 [2024-11-20 06:43:34.717471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:03.413 [2024-11-20 06:43:34.717476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:03.413 [2024-11-20 06:43:34.717481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:03.413 [2024-11-20 06:43:34.718921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:03.413 [2024-11-20 06:43:34.719026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:03.413 [2024-11-20 06:43:34.719027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:03.413 [2024-11-20 06:43:34.786080] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:03.413 [2024-11-20 06:43:34.786777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:33:03.413 [2024-11-20 06:43:34.787076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:03.413 [2024-11-20 06:43:34.787188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:33:03.413 06:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:33:03.413 [2024-11-20 06:43:35.015693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:03.413 06:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:33:03.413 06:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:03.672 [2024-11-20 06:43:35.400789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:03.672 06:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:03.931 06:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:33:04.189 Malloc0
00:33:04.189 06:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:04.189 Delay0
00:33:04.189 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:04.447 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:33:04.705 NULL1
00:33:04.705 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
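
Condensed, the rpc.py sequence traced above provisions the whole target: a TCP transport, a subsystem capped at 10 namespaces, data and discovery listeners, then three bdevs (a RAM-backed Malloc0, a Delay0 wrapper that injects fixed latency on top of it, and a resizable 1000 MiB NULL1) exposed as namespaces. A sketch of the same sequence, assuming a running nvmf_tgt reachable over the default RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0      # 32 MiB RAM disk, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # namespace 1: the slow bdev
  $rpc bdev_null_create NULL1 1000 512           # 1000 MiB null bdev, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # namespace 2: will be resized
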
00:33:04.963 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=719748 00:33:04.963 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:04.963 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:04.964 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.337 Read completed with error (sct=0, sc=11) 00:33:06.337 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:06.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:06.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:06.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:06.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:06.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:06.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:06.337 06:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:33:06.337 06:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:33:06.595 true 00:33:06.595 06:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:06.595 06:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.527 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:07.527 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:33:07.527 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:33:07.784 true 00:33:07.784 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:07.784 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.784 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:08.042 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:33:08.042 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:33:08.299 true 00:33:08.299 06:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:08.299 06:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:09.671 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:09.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:09.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:09.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:09.671 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:33:09.671 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:33:09.671 true 00:33:09.929 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:09.929 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.929 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:10.186 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:33:10.186 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:33:10.444 true 00:33:10.444 06:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:10.444 06:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:11.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:11.815 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:33:11.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:11.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:11.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:11.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:11.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:11.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:11.815 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:33:11.815 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:33:12.073 true 00:33:12.073 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:12.073 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.005 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:13.005 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:33:13.005 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:33:13.263 true 00:33:13.263 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:13.263 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.263 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:13.520 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:33:13.520 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:33:13.778 true 00:33:13.778 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:13.778 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.711 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:14.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.968 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:33:14.968 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:15.226 true 00:33:15.226 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:15.226 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:16.158 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.158 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:16.158 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.158 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:16.158 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:16.462 true 00:33:16.462 06:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:16.462 06:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:16.733 06:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:17.017 06:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:33:17.017 06:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:33:17.017 true 00:33:17.017 06:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:17.017 06:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:18.392 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:33:18.392 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:18.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.392 06:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:33:18.392 06:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:33:18.651 true 00:33:18.651 06:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:18.651 06:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:19.587 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:19.587 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:33:19.587 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:33:19.846 true 00:33:19.846 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:19.846 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:20.105 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:20.105 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:33:20.105 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:33:20.363 true 00:33:20.363 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:20.363 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:21.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.740 
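Every iteration in this stretch of the log (null_size=1003 up through 1028 below) walks the same five traced script lines, @44-@50 of ns_hotplug_stress.sh. Reconstructed from the xtrace output, the loop is roughly the sketch below; the rpc_py, nqn, and perf_pid shorthands are editorial assumptions, and only the commands and line numbers come from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as printed in the trace
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=719748    # PID of the background I/O generator in this run
    null_size=1000
    while kill -0 "$perf_pid"; do                        # @44: loop while the I/O generator is alive
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1      # @45: hot-remove namespace 1 under load
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0    # @46: re-attach the Delay0 bdev as namespace 1
        ((null_size++))                                  # @49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"    # @50: resize the unrelated NULL1 bdev (rpc.py prints "true" on success)
    done

The repeated "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the initiator side of the same stress: rate-limited read failures, consistent with reads landing on namespace 1 while it is momentarily detached (sct=0 with sc=11 plausibly decodes to the generic NVMe "Invalid Namespace or Format" status, 0x0b, assuming a decimal rendering of the code).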
06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:21.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.740 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:33:21.740 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:33:21.999 true 00:33:21.999 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:21.999 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:22.937 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:22.937 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:33:22.937 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:33:23.196 true 00:33:23.196 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:23.196 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:23.196 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:23.455 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:33:23.455 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:33:23.713 true 00:33:23.713 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:23.713 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:24.650 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:33:24.650 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:24.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:24.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:24.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:24.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:24.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:24.909 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:33:24.909 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:33:25.168 true 00:33:25.168 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:25.168 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:26.105 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:26.105 06:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:26.105 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:26.105 06:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:33:26.105 06:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:26.364 true 00:33:26.364 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:26.364 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:26.623 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:26.882 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:33:26.882 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:26.882 true 00:33:27.140 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:27.141 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:28.074 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:28.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:28.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:28.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:28.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:28.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:28.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:28.334 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:33:28.334 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:33:28.592 true 00:33:28.592 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:28.592 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:29.527 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:29.527 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:33:29.528 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:33:29.786 true 00:33:29.786 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:29.786 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:30.044 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:30.044 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:33:30.044 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:33:30.301 true 00:33:30.301 06:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:30.301 06:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:31.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:31.234 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:31.491 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:33:31.491 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:31.748 true 00:33:31.748 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:31.748 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:32.006 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:32.006 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:32.006 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:32.264 true 00:33:32.264 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:32.264 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.638 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.638 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:33.638 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:33.896 true 00:33:33.896 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:33.896 06:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:34.831 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:34.831 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:33:34.831 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:33:35.090 true 00:33:35.090 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748 00:33:35.091 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:35.091 Initializing NVMe Controllers 00:33:35.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:35.091 Controller IO queue size 128, less than required. 00:33:35.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:35.091 Controller IO queue size 128, less than required. 00:33:35.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:35.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:35.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:35.091 Initialization complete. Launching workers. 
00:33:35.091 ========================================================
00:33:35.091                                                                              Latency(us)
00:33:35.091 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:33:35.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2077.15       1.01   42620.27    2282.81 1012170.83
00:33:35.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18441.73       9.00    6940.87    1554.99  371114.74
00:33:35.091 ========================================================
00:33:35.091 Total                                                                  :   20518.87      10.02   10552.74    1554.99 1012170.83
00:33:35.349 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:35.608 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:33:35.608 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:33:35.608 true
00:33:35.867 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 719748
00:33:35.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (719748) - No such process
00:33:35.867 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 719748
00:33:35.867 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:35.867 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:36.126 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:33:36.126 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:33:36.126 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:33:36.126 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:36.126 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:33:36.384 null0
00:33:36.384 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:36.384 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:36.384 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:33:36.384 null1
00:33:36.384 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:36.384
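The summary block above is emitted once the I/O generator (PID 719748) finishes: NSID 1, the Delay0 namespace that was being hot-plugged the whole time, managed only about 2,077 IOPS at a 42.6 ms average latency, while the undisturbed NSID 2 ran at about 18,442 IOPS and 6.9 ms. The Total row is the IOPS-weighted aggregate of the two rows, which a quick standalone check reproduces up to display rounding:

    # Recompute the Total row from the two per-namespace rows above
    awk 'BEGIN {
        i1 = 2077.15;  l1 = 42620.27   # NSID 1: IOPS, average latency (us)
        i2 = 18441.73; l2 = 6940.87    # NSID 2: IOPS, average latency (us)
        printf "Total: %.2f IOPS, %.2f us weighted average latency\n", i1 + i2, (i1*l1 + i2*l2)/(i1 + i2)
    }'

Once the "No such process" message at @44 confirms the generator has exited, the script tears down both namespaces, and the bdev_null_create null0 100 4096 calls here and just below set up eight 100 MB, 4096-byte-block null bdevs (null0 through null7) for the parallel add/remove phase.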
06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:36.384 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:36.643 null2 00:33:36.643 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:36.643 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:36.643 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:36.902 null3 00:33:36.902 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:36.902 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:36.902 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:37.161 null4 00:33:37.161 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:37.161 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:37.161 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:37.161 null5 00:33:37.161 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:37.161 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:37.161 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:37.420 null6 00:33:37.420 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:37.420 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:37.420 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:37.679 null7 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.679 06:44:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
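The interleaved @14-@18 lines around this point are eight copies of the script's add_remove helper starting up. Its body can be reconstructed from the xtrace output roughly as follows (a sketch inferred from the trace, not the verbatim script; rpc_py as in the earlier sketch):

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace ten times
    add_remove() {
        local nsid=$1 bdev=$2                                                              # @14
        for ((i = 0; i < 10; i++)); do                                                     # @16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }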
00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:37.679 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
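The @62-@64 lines trace the dispatch loop that backgrounds one add_remove worker per null bdev and collects each worker's PID; the wait on all eight (725086 through 725099) appears just below at @66. A sketch of that fan-out, under the same assumptions as the sketches above:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do    # @62
        add_remove $((i + 1)) "null$i" &    # @63: namespace IDs 1-8 paired with null0-null7
        pids+=($!)                          # @64: remember each worker's PID
    done
    wait "${pids[@]}"                       # @66: block until all eight workers finish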
00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 725086 725088 725090 725091 725093 725095 725097 725099 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.680 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:37.939 06:44:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.939 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:38.199 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:38.199 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:38.199 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:38.199 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:38.199 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:38.199 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:38.199 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:38.199 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.458 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:38.717 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:38.976 06:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:38.976 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:38.977 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.236 06:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.236 06:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:39.495 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:39.495 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:39.495 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:39.495 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:39.495 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:39.495 
06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:39.495 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:39.495 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:39.754 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.013 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.014 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:40.014 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.014 
06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.014 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:40.272 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:40.272 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:40.272 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:40.272 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:40.272 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:40.272 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:40.272 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:40.272 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.531 06:44:12 
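[annotation] Each add/remove pair above is a plain JSON-RPC call: nvmf_subsystem_add_ns attaches an existing bdev to the subsystem under an explicit namespace ID, and nvmf_subsystem_remove_ns detaches by that ID. The fixed nullN-to-NSID mapping (null4 becomes NSID 5, and so on) keeps every pass reproducible while the ordering varies. Invocation shape exactly as it appears in the log:

  # attach bdev null2 to cnode1 as namespace 3, then detach namespace 3
  scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3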
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.531 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:40.790 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:41.049 06:44:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:41.049 
06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:41.049 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:41.307 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.308 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:41.567 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:41.567 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:41.567 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:41.567 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:41.567 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:41.567 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:41.567 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:41.567 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.826 rmmod nvme_tcp 00:33:41.826 rmmod nvme_fabrics 00:33:41.826 rmmod nvme_keyring 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 719483 ']' 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 719483 00:33:41.826 06:44:13 
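[annotation] Once i reaches 10 the EXIT trap is cleared and nvmftestfini tears the target down. The common.sh@124-@128 tags sketch a tolerant unload sequence: errexit is suspended, the kernel modules are unloaded with up to 20 attempts, then errexit is restored; the bare rmmod lines are the modules' own unload messages. A hedged reconstruction (the break/backoff wiring is an assumption; only the set +e / modprobe -r / set -e steps appear in the log):

  set +e                                   # @124: tolerate busy modules
  for i in {1..20}; do                     # @125
      modprobe -v -r nvme-tcp && break     # @126: emits the rmmod lines above
      sleep 1                              # assumed backoff between attempts
  done
  modprobe -v -r nvme-fabrics              # @127
  set -e                                   # @128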
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 719483 ']' 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 719483 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 719483 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 719483' 00:33:41.826 killing process with pid 719483 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 719483 00:33:41.826 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 719483 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.086 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.622 06:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.622 00:33:44.622 real 0m47.412s 00:33:44.622 user 2m56.493s 00:33:44.622 sys 0m19.846s 00:33:44.622 06:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:44.622 06:44:15 
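[annotation] killprocess then reaps the nvmf target (pid 719483): it verifies the pid is set and still alive (kill -0), confirms on Linux that the process comm name (reactor_1 here) is not sudo before signalling, then kills and waits. The firewall scrub tagged common.sh@791 is a single pipeline that drops only the SPDK_NVMF rules and re-applies everything else:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # @791: keep all non-SPDK rules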
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:44.622 ************************************ 00:33:44.622 END TEST nvmf_ns_hotplug_stress 00:33:44.622 ************************************ 00:33:44.622 06:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:44.622 06:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:44.622 06:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:44.623 06:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:44.623 ************************************ 00:33:44.623 START TEST nvmf_delete_subsystem 00:33:44.623 ************************************ 00:33:44.623 06:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:44.623 * Looking for test storage... 00:33:44.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:44.623 06:44:16 
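[annotation] The lcov probe above walks the version comparator in scripts/common.sh: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both versions on '.-:' and compares component by component. A sketch of that walk, assuming the shape shown by the @333-@368 tags (the decimal() digit validation at @353-@355 is folded into the arithmetic here, and the equal-versions fallthrough is abbreviated):

  lt() { cmp_versions "$1" '<' "$2"; }                   # @373
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS='.-:' read -ra ver1 <<< "$1"                   # @336: 1.15 -> (1 15)
      IFS='.-:' read -ra ver2 <<< "$3"                   # @337: 2    -> (2)
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }   # @367
          ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }   # @368: 1 < 2, so lt holds
      done
      [[ $op == *'='* ]]    # versions equal: only ops allowing equality succeed
  }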
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:44.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.623 --rc genhtml_branch_coverage=1 00:33:44.623 --rc genhtml_function_coverage=1 00:33:44.623 --rc genhtml_legend=1 00:33:44.623 --rc geninfo_all_blocks=1 00:33:44.623 --rc geninfo_unexecuted_blocks=1 00:33:44.623 00:33:44.623 ' 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:44.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.623 --rc genhtml_branch_coverage=1 00:33:44.623 --rc genhtml_function_coverage=1 00:33:44.623 --rc genhtml_legend=1 00:33:44.623 --rc geninfo_all_blocks=1 00:33:44.623 --rc geninfo_unexecuted_blocks=1 00:33:44.623 00:33:44.623 ' 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:44.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.623 --rc genhtml_branch_coverage=1 00:33:44.623 --rc genhtml_function_coverage=1 00:33:44.623 --rc genhtml_legend=1 00:33:44.623 --rc geninfo_all_blocks=1 00:33:44.623 --rc 
geninfo_unexecuted_blocks=1 00:33:44.623 00:33:44.623 ' 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:44.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.623 --rc genhtml_branch_coverage=1 00:33:44.623 --rc genhtml_function_coverage=1 00:33:44.623 --rc genhtml_legend=1 00:33:44.623 --rc geninfo_all_blocks=1 00:33:44.623 --rc geninfo_unexecuted_blocks=1 00:33:44.623 00:33:44.623 ' 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.623 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.624 06:44:16 
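[annotation] The enormous PATH above is not corruption: paths/export.sh prepends the same three toolchain directories every time it is sourced, and this shell has sourced it repeatedly, so the segments pile up. The shape implied by the @2-@5 tags:

  PATH=/opt/golangci/1.54.2/bin:$PATH    # @2
  PATH=/opt/go/1.21.1/bin:$PATH          # @3
  PATH=/opt/protoc/21.7/bin:$PATH        # @4
  export PATH                            # @5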
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:33:44.624 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.195 06:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.195 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:51.196 06:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:51.196 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:51.196 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.196 06:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:51.196 Found net devices under 0000:86:00.0: cvl_0_0 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:51.196 Found net devices under 0000:86:00.1: cvl_0_1 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:51.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:51.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:33:51.196 00:33:51.196 --- 10.0.0.2 ping statistics --- 00:33:51.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.196 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:33:51.196 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:51.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:33:51.196 00:33:51.196 --- 10.0.0.1 ping statistics --- 00:33:51.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.196 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:33:51.196 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.196 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:33:51.196 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:51.196 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.196 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:51.196 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=729451 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 729451 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 729451 ']' 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
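At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A minimal sketch of that readiness poll, assuming the default socket path; the retry budget and interval here are illustrative, not autotest_common.sh's exact values:

  # Poll until the RPC server responds; spdk_get_version is a cheap RPC
  # that succeeds as soon as the app thread is serving requests.
  for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null && break
      sleep 0.5
  done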
00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 [2024-11-20 06:44:22.095444] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:51.197 [2024-11-20 06:44:22.096321] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:33:51.197 [2024-11-20 06:44:22.096355] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.197 [2024-11-20 06:44:22.174743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:51.197 [2024-11-20 06:44:22.215281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:51.197 [2024-11-20 06:44:22.215315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.197 [2024-11-20 06:44:22.215322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.197 [2024-11-20 06:44:22.215328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.197 [2024-11-20 06:44:22.215333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:51.197 [2024-11-20 06:44:22.216560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.197 [2024-11-20 06:44:22.216562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.197 [2024-11-20 06:44:22.282418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:51.197 [2024-11-20 06:44:22.282946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:51.197 [2024-11-20 06:44:22.283168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
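The thread.c notices just logged are the effect of the --interrupt-mode flag that build_nvmf_app_args appended earlier: the app thread plus one poll-group thread per core in the 0x3 mask are switched to interrupt mode, so the two reactors park on a file descriptor instead of busy-polling. Reassembled from the values traced above, the launch amounts to the following; the variable names mirror nvmf/common.sh, but treat the assembly itself as a sketch:

  # Launch the target as traced: SHM id 0, all tracepoint groups enabled,
  # interrupt mode, cores 0-1, inside the test's network namespace.
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode)
  "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0x3 &
  nvmfpid=$!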
00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 [2024-11-20 06:44:22.349353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 [2024-11-20 06:44:22.377668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 NULL1 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.197 06:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 Delay0 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=729479 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:51.197 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:51.197 [2024-11-20 06:44:22.490084] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
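The stack that perf is now exercising was assembled entirely over JSON-RPC; rpc_cmd in the trace is autotest_common.sh's wrapper around scripts/rpc.py talking to the same /var/tmp/spdk.sock. A sketch of the equivalent standalone calls, with the exact arguments logged above (only the $RPC shorthand is invented here):

  RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # NULL1: a 1000 MiB null bdev with 512-byte blocks (no real backing storage).
  $RPC bdev_null_create NULL1 1000 512
  # Delay0 wraps NULL1 and injects 1000000 us (~1 s) of latency on reads and
  # writes alike, so queue-depth-128 I/O is guaranteed to still be in flight
  # when the subsystem is deleted out from under it.
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The perf job just started (-t 5, queue depth 128, 512-byte random I/O, 70/30 read/write mix, cores 2-3) therefore cannot complete a single command before nvmf_delete_subsystem lands two seconds later.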
00:33:53.097 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:53.098 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.098 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 [2024-11-20 06:44:24.602889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdd680 is same with the state(6) to be set 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 
00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error 
(sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 starting I/O failed: -6 00:33:53.098 starting I/O failed: -6 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.098 Read completed with error (sct=0, sc=8) 00:33:53.098 starting I/O failed: -6 00:33:53.098 Write completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 
00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Write completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Write completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Write completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Write completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Write completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 Write completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Write completed with error (sct=0, sc=8) 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 starting I/O failed: -6 00:33:53.099 Read completed with error (sct=0, sc=8) 00:33:53.099 [2024-11-20 06:44:24.607080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb868000c40 is same with the state(6) to be set 00:33:54.035 [2024-11-20 06:44:25.582924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde9a0 is same with the state(6) to be set 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 
00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 [2024-11-20 06:44:25.606263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdd4a0 is same with the state(6) to be set 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 [2024-11-20 06:44:25.606697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdd860 is same with the state(6) to be set 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.035 Read completed with error (sct=0, sc=8) 00:33:54.035 
Read completed with error (sct=0, sc=8) 00:33:54.035 Write completed with error (sct=0, sc=8) 00:33:54.036 [2024-11-20 06:44:25.608601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb86800d7e0 is same with the state(6) to be set 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Read completed with error (sct=0, sc=8) 00:33:54.036 Write completed with error (sct=0, sc=8) 00:33:54.036 [2024-11-20 06:44:25.609300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb86800d020 is same with the state(6) to be set 00:33:54.036 Initializing NVMe Controllers 00:33:54.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:54.036 Controller IO queue size 128, less than required. 00:33:54.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:54.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:54.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:54.036 Initialization complete. Launching workers. 
00:33:54.036 ========================================================
00:33:54.036                                                                              Latency(us)
00:33:54.036 Device Information                                                         :       IOPS      MiB/s    Average        min        max
00:33:54.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     165.26       0.08  903729.01     255.62 1006435.03
00:33:54.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     181.19       0.09  912967.39     328.69 1009666.71
00:33:54.036 ========================================================
00:33:54.036 Total                                                                      :     346.46       0.17  908560.58     255.62 1009666.71
00:33:54.036 00:33:54.036 [2024-11-20 06:44:25.609848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cde9a0 (9): Bad file descriptor 00:33:54.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:33:54.036 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.036 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:33:54.036 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 729479 00:33:54.036 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 729479 00:33:54.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (729479) - No such process 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 729479 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 729479 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 729479 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.296 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:54.554 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.554 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:54.555 [2024-11-20 06:44:26.137592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=730158 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 730158 00:33:54.555 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:54.555 [2024-11-20 06:44:26.220677] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
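The first phase above played out as designed: nvmf_delete_subsystem landed while 128-deep queues were parked inside Delay0, so every outstanding command came back with generic status sc=8 (Command Aborted due to SQ Deletion in the NVMe base spec) or could not be submitted at all ("starting I/O failed: -6"), and the harness then polled until perf died. The second phase, just started, re-creates cnode1 and runs a bounded 3-second job (pid 730158) that is expected to finish cleanly; the ~1 s averages in the latency summary below are simply Delay0's injected delay showing through end to end. Both phases share the same poll loop, reconstructed here from the traced commands (the real script's failure handling may differ):

  # Wait for the perf process to exit; phase 1 allows ~15 s (budget 30),
  # phase 2 ~10 s (budget 20), in 0.5 s steps.
  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do
      (( delay++ > 20 )) && { echo 'perf did not exit in time' >&2; break; }
      sleep 0.5
  done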
00:33:55.120 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:55.120 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 730158 00:33:55.120 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:55.377 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:55.377 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 730158 00:33:55.377 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:55.942 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:55.942 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 730158 00:33:55.942 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:56.509 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:56.509 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 730158 00:33:56.509 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:57.075 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:57.075 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 730158 00:33:57.075 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:57.641 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:57.641 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 730158 00:33:57.641 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:57.641 Initializing NVMe Controllers 00:33:57.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:57.641 Controller IO queue size 128, less than required. 00:33:57.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:57.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:57.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:57.641 Initialization complete. Launching workers. 
00:33:57.641 ======================================================== 00:33:57.641 Latency(us) 00:33:57.641 Device Information : IOPS MiB/s Average min max 00:33:57.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002323.14 1000149.01 1041390.21 00:33:57.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004262.37 1000224.66 1042404.88 00:33:57.641 ======================================================== 00:33:57.641 Total : 256.00 0.12 1003292.75 1000149.01 1042404.88 00:33:57.641 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 730158 00:33:57.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (730158) - No such process 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 730158 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:57.899 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:57.899 rmmod nvme_tcp 00:33:57.899 rmmod nvme_fabrics 00:33:57.899 rmmod nvme_keyring 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 729451 ']' 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 729451 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 729451 ']' 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 729451 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 729451 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 729451' 00:33:58.158 killing process with pid 729451 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 729451 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 729451 00:33:58.158 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.159 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:00.696 00:34:00.696 real 0m16.115s 00:34:00.696 user 0m26.101s 00:34:00.696 sys 0m6.102s 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:00.696 ************************************ 00:34:00.696 END TEST nvmf_delete_subsystem 00:34:00.696 ************************************ 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:00.696 ************************************ 00:34:00.696 START TEST nvmf_host_management 00:34:00.696 ************************************ 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:00.696 * Looking for test storage... 00:34:00.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:00.696 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.696 --rc genhtml_branch_coverage=1 00:34:00.696 --rc genhtml_function_coverage=1 00:34:00.696 --rc genhtml_legend=1 00:34:00.696 --rc geninfo_all_blocks=1 00:34:00.696 --rc geninfo_unexecuted_blocks=1 00:34:00.696 00:34:00.696 ' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:00.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.697 --rc genhtml_branch_coverage=1 00:34:00.697 --rc genhtml_function_coverage=1 00:34:00.697 --rc genhtml_legend=1 00:34:00.697 --rc geninfo_all_blocks=1 00:34:00.697 --rc geninfo_unexecuted_blocks=1 00:34:00.697 00:34:00.697 ' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:00.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.697 --rc genhtml_branch_coverage=1 00:34:00.697 --rc genhtml_function_coverage=1 00:34:00.697 --rc genhtml_legend=1 00:34:00.697 --rc geninfo_all_blocks=1 00:34:00.697 --rc geninfo_unexecuted_blocks=1 00:34:00.697 00:34:00.697 ' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:00.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.697 --rc genhtml_branch_coverage=1 00:34:00.697 --rc genhtml_function_coverage=1 00:34:00.697 --rc genhtml_legend=1 
00:34:00.697 --rc geninfo_all_blocks=1 00:34:00.697 --rc geninfo_unexecuted_blocks=1 00:34:00.697 00:34:00.697 ' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.697 06:44:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:34:00.697 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:07.270 06:44:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:07.270 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:07.270 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
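The discovery pass here walks each detected e810 PCI function and reads the net devices the kernel exposes under it in sysfs; the basename trim and the resulting cvl_0_* names appear next in the trace. A short sketch of that lookup, with the PCI address taken from the trace purely for illustration:

# list kernel net devices bound to a PCI function via sysfs
pci=0000:86:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"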
00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:07.270 Found net devices under 0000:86:00.0: cvl_0_0 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:07.270 Found net devices under 0000:86:00.1: cvl_0_1 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.270 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.271 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:07.271 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.271 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.271 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.271 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:07.271 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:07.271 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.271 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:34:07.271 00:34:07.271 --- 10.0.0.2 ping statistics --- 00:34:07.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.271 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:34:07.271 00:34:07.271 --- 10.0.0.1 ping statistics --- 00:34:07.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.271 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=734152 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 734152 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 734152 ']' 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:07.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.271 [2024-11-20 06:44:38.273586] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:07.271 [2024-11-20 06:44:38.274522] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:34:07.271 [2024-11-20 06:44:38.274557] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.271 [2024-11-20 06:44:38.354993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:07.271 [2024-11-20 06:44:38.397769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.271 [2024-11-20 06:44:38.397804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.271 [2024-11-20 06:44:38.397811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.271 [2024-11-20 06:44:38.397817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.271 [2024-11-20 06:44:38.397822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.271 [2024-11-20 06:44:38.399483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:07.271 [2024-11-20 06:44:38.399588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:07.271 [2024-11-20 06:44:38.399694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.271 [2024-11-20 06:44:38.399695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:07.271 [2024-11-20 06:44:38.467443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:07.271 [2024-11-20 06:44:38.468016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:07.271 [2024-11-20 06:44:38.468305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:07.271 [2024-11-20 06:44:38.468438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:07.271 [2024-11-20 06:44:38.468531] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
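nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with interrupt mode enabled, then blocks until the RPC socket answers. A sketch of that launch pattern with the mask and flags from the trace; the until-loop is one way to wait for readiness (the harness itself uses its waitforlisten helper rather than this exact loop):

# start the target inside the network namespace set up earlier
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app finishes initializing
until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done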
00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.271 [2024-11-20 06:44:38.532396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.271 Malloc0 00:34:07.271 [2024-11-20 06:44:38.620686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=734199 00:34:07.271 06:44:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 734199 /var/tmp/bdevperf.sock 00:34:07.271 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 734199 ']' 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:07.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:07.272 { 00:34:07.272 "params": { 00:34:07.272 "name": "Nvme$subsystem", 00:34:07.272 "trtype": "$TEST_TRANSPORT", 00:34:07.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:07.272 "adrfam": "ipv4", 00:34:07.272 "trsvcid": "$NVMF_PORT", 00:34:07.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:07.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:07.272 "hdgst": ${hdgst:-false}, 00:34:07.272 "ddgst": ${ddgst:-false} 00:34:07.272 }, 00:34:07.272 "method": "bdev_nvme_attach_controller" 00:34:07.272 } 00:34:07.272 EOF 00:34:07.272 )") 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
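The heredoc above renders one bdev_nvme_attach_controller params block per subsystem and pipes it through jq; the /dev/fd/63 in the bdevperf command line is bash process substitution, which hands the generated config to bdevperf without a temporary file. The rendered JSON is printed next in the trace. A sketch of the invocation, assuming the gen_nvmf_target_json helper from the harness is in scope:

# feed the generated JSON config to bdevperf via process substitution
# (this is what produces the /dev/fd/63 seen in the command line above)
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10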
00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:07.272 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:07.272 "params": { 00:34:07.272 "name": "Nvme0", 00:34:07.272 "trtype": "tcp", 00:34:07.272 "traddr": "10.0.0.2", 00:34:07.272 "adrfam": "ipv4", 00:34:07.272 "trsvcid": "4420", 00:34:07.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:07.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:07.272 "hdgst": false, 00:34:07.272 "ddgst": false 00:34:07.272 }, 00:34:07.272 "method": "bdev_nvme_attach_controller" 00:34:07.272 }' 00:34:07.272 [2024-11-20 06:44:38.721193] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:34:07.272 [2024-11-20 06:44:38.721257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734199 ] 00:34:07.272 [2024-11-20 06:44:38.795038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.272 [2024-11-20 06:44:38.835916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.530 Running I/O for 10 seconds... 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:07.530 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=92 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 92 -ge 100 ']' 00:34:07.531 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.791 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.791 [2024-11-20 06:44:39.512145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aad70 is same with the state(6) to be set 00:34:07.791 [2024-11-20 06:44:39.512186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aad70 is same with the state(6) to be set 00:34:07.791 [2024-11-20 06:44:39.512433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.791 [2024-11-20 06:44:39.512467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.791 [2024-11-20 06:44:39.512482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.791 [2024-11-20 06:44:39.512490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.791 [2024-11-20 06:44:39.512499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.791 [2024-11-20 06:44:39.512506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.791 [2024-11-20 06:44:39.512515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.791 [2024-11-20 06:44:39.512521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.791 [2024-11-20 06:44:39.512529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.791 [2024-11-20 06:44:39.512536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.791 [2024-11-20 06:44:39.512544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.791 [2024-11-20 06:44:39.512550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.791 [2024-11-20 06:44:39.512558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.791 [2024-11-20 06:44:39.512564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.791 [2024-11-20 06:44:39.512573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.791 [2024-11-20 06:44:39.512579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.512988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.512995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.792 [2024-11-20 06:44:39.513155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.792 [2024-11-20 06:44:39.513164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.793 [2024-11-20 06:44:39.513416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.513423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361820 is same with the state(6) to be set 00:34:07.793 [2024-11-20 06:44:39.514362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:07.793 task offset: 101120 on job bdev=Nvme0n1 fails 00:34:07.793 00:34:07.793 Latency(us) 00:34:07.793 [2024-11-20T05:44:39.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.793 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.793 Job: Nvme0n1 ended in about 0.40 seconds with error 00:34:07.793 Verification LBA range: start 0x0 length 0x400 00:34:07.793 Nvme0n1 : 0.40 1917.78 119.86 159.81 0.00 29982.94 1466.76 26838.55 00:34:07.793 [2024-11-20T05:44:39.629Z] =================================================================================================================== 00:34:07.793 [2024-11-20T05:44:39.629Z] Total : 1917.78 119.86 159.81 0.00 29982.94 1466.76 26838.55 00:34:07.793 [2024-11-20 06:44:39.516720] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:07.793 [2024-11-20 06:44:39.516741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2148500 (9): Bad file descriptor 00:34:07.793 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.793 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:07.793 [2024-11-20 06:44:39.517743] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:34:07.793 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.793 [2024-11-20 06:44:39.517809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:07.793 [2024-11-20 
06:44:39.517833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.793 [2024-11-20 06:44:39.517848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:34:07.793 [2024-11-20 06:44:39.517855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:34:07.793 [2024-11-20 06:44:39.517862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.793 [2024-11-20 06:44:39.517869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2148500 00:34:07.793 [2024-11-20 06:44:39.517887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2148500 (9): Bad file descriptor 00:34:07.793 [2024-11-20 06:44:39.517898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:07.793 [2024-11-20 06:44:39.517904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:07.793 [2024-11-20 06:44:39.517916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:07.793 [2024-11-20 06:44:39.517924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:07.793 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.793 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.793 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:34:08.730 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 734199 00:34:08.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (734199) - No such process 00:34:08.730 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:34:08.730 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:08.730 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:08.730 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:08.731 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:08.731 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:08.731 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.731 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.731 { 
00:34:08.731 "params": { 00:34:08.731 "name": "Nvme$subsystem", 00:34:08.731 "trtype": "$TEST_TRANSPORT", 00:34:08.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.731 "adrfam": "ipv4", 00:34:08.731 "trsvcid": "$NVMF_PORT", 00:34:08.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.731 "hdgst": ${hdgst:-false}, 00:34:08.731 "ddgst": ${ddgst:-false} 00:34:08.731 }, 00:34:08.731 "method": "bdev_nvme_attach_controller" 00:34:08.731 } 00:34:08.731 EOF 00:34:08.731 )") 00:34:08.731 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:08.731 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:08.731 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:08.731 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:08.731 "params": { 00:34:08.731 "name": "Nvme0", 00:34:08.731 "trtype": "tcp", 00:34:08.731 "traddr": "10.0.0.2", 00:34:08.731 "adrfam": "ipv4", 00:34:08.731 "trsvcid": "4420", 00:34:08.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.731 "hdgst": false, 00:34:08.731 "ddgst": false 00:34:08.731 }, 00:34:08.731 "method": "bdev_nvme_attach_controller" 00:34:08.731 }' 00:34:08.990 [2024-11-20 06:44:40.584649] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:34:08.990 [2024-11-20 06:44:40.584702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734587 ] 00:34:08.990 [2024-11-20 06:44:40.662024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.990 [2024-11-20 06:44:40.702948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.321 Running I/O for 1 seconds... 
00:34:10.329 1984.00 IOPS, 124.00 MiB/s 00:34:10.329 Latency(us) 00:34:10.329 [2024-11-20T05:44:42.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.329 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:10.329 Verification LBA range: start 0x0 length 0x400 00:34:10.329 Nvme0n1 : 1.00 2040.25 127.52 0.00 0.00 30879.53 6054.28 27088.21 00:34:10.329 [2024-11-20T05:44:42.165Z] =================================================================================================================== 00:34:10.329 [2024-11-20T05:44:42.165Z] Total : 2040.25 127.52 0.00 0.00 30879.53 6054.28 27088.21 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.329 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.588 rmmod nvme_tcp 00:34:10.588 rmmod nvme_fabrics 00:34:10.588 rmmod nvme_keyring 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 734152 ']' 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 734152 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 734152 ']' 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 734152 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:10.588 06:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 734152 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 734152' 00:34:10.588 killing process with pid 734152 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 734152 00:34:10.588 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 734152 00:34:10.848 [2024-11-20 06:44:42.426831] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.848 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.753 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.753 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:12.753 00:34:12.753 real 0m12.412s 00:34:12.753 user 0m18.306s 00:34:12.753 sys 0m6.332s 00:34:12.753 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:12.753 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:12.753 ************************************ 00:34:12.753 END TEST nvmf_host_management 00:34:12.753 ************************************ 00:34:12.753 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:12.753 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:12.753 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:12.753 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:13.013 ************************************ 00:34:13.013 START TEST nvmf_lvol 00:34:13.013 ************************************ 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:13.013 * Looking for test storage... 00:34:13.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:34:13.013 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.014 --rc genhtml_branch_coverage=1 00:34:13.014 --rc genhtml_function_coverage=1 00:34:13.014 --rc genhtml_legend=1 00:34:13.014 --rc geninfo_all_blocks=1 00:34:13.014 --rc geninfo_unexecuted_blocks=1 00:34:13.014 00:34:13.014 ' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.014 --rc genhtml_branch_coverage=1 00:34:13.014 --rc genhtml_function_coverage=1 00:34:13.014 --rc genhtml_legend=1 00:34:13.014 --rc geninfo_all_blocks=1 00:34:13.014 --rc geninfo_unexecuted_blocks=1 00:34:13.014 00:34:13.014 ' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.014 --rc genhtml_branch_coverage=1 00:34:13.014 --rc genhtml_function_coverage=1 00:34:13.014 --rc genhtml_legend=1 00:34:13.014 --rc geninfo_all_blocks=1 00:34:13.014 --rc geninfo_unexecuted_blocks=1 00:34:13.014 00:34:13.014 ' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.014 --rc genhtml_branch_coverage=1 00:34:13.014 --rc genhtml_function_coverage=1 00:34:13.014 --rc genhtml_legend=1 00:34:13.014 --rc geninfo_all_blocks=1 00:34:13.014 --rc geninfo_unexecuted_blocks=1 00:34:13.014 00:34:13.014 ' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.014 06:44:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.014 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:13.015 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:13.015 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:34:13.015 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:18.290 06:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.290 06:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:34:18.290 06:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:18.290 06:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:18.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:18.290 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:18.290 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:18.291 Found net devices under 0000:86:00.0: cvl_0_0 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:18.291 Found net devices under 0000:86:00.1: cvl_0_1 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:18.291 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.291 
06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:18.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:34:18.551 00:34:18.551 --- 10.0.0.2 ping statistics --- 00:34:18.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.551 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:18.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:34:18.551 00:34:18.551 --- 10.0.0.1 ping statistics --- 00:34:18.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.551 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=738215 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 738215 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 738215 ']' 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:18.551 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:18.551 [2024-11-20 06:44:50.370275] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:34:18.551 [2024-11-20 06:44:50.371178] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:34:18.551 [2024-11-20 06:44:50.371218] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.811 [2024-11-20 06:44:50.452089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:18.811 [2024-11-20 06:44:50.493723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.811 [2024-11-20 06:44:50.493758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.811 [2024-11-20 06:44:50.493765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.811 [2024-11-20 06:44:50.493775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.811 [2024-11-20 06:44:50.493779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.811 [2024-11-20 06:44:50.495053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.811 [2024-11-20 06:44:50.495089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.811 [2024-11-20 06:44:50.495090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:18.811 [2024-11-20 06:44:50.562144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:18.811 [2024-11-20 06:44:50.562903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:18.811 [2024-11-20 06:44:50.563021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:18.811 [2024-11-20 06:44:50.563192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
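
The nvmftestinit trace above (nvmf/common.sh@309-@429) selects which physical NICs the test uses: PCI IDs are bucketed into e810/x722/mlx families, the e810 family is kept because the job config sets SPDK_TEST_NVMF_NICS=e810, and each matching PCI address is mapped to its kernel net device through sysfs. A condensed, annotator-added sketch of that logic follows; the variable names mirror the trace, but the population of pci_bus_cache (a PCI scan elsewhere in common.sh) is assumed rather than shown in this excerpt:

    # Bucket NICs by "vendor:device", then keep the requested family.
    # pci_bus_cache is assumed to map "vendor:device" -> PCI addresses;
    # only IDs visible in this log's trace are listed here.
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=() net_devs=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV (matched here: 0000:86:00.0/1)
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs in the script
    pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810 picks this family
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # kernel netdev(s) behind this port
        pci_net_devs=("${pci_net_devs[@]##*/}")           # basename -> e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done

In this run both E810-XXV ports (device 0x159b) resolve to cvl_0_0 and cvl_0_1, which is why the later (( 2 > 1 )) check splits them into a target interface and an initiator interface.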
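
With two ports on one host, nvmf_tcp_init then wires them back to back: cvl_0_0 moves into a fresh network namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, and both directions are ping-verified before the target starts. A sketch assembled from the commands in the trace; only the trailing '&' is added here to stand in for nvmfappstart's backgrounding of the target:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ipts wrapper tags the firewall rule so teardown can strip
    # exactly what this test added and nothing else:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &

Teardown (visible at the end of the lvol run below) undoes this with iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr helper), flushes the addresses, and removes the namespace via _remove_spdk_ns.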
00:34:19.379 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:19.379 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:34:19.380 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:19.380 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:19.639 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:19.639 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.639 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:19.639 [2024-11-20 06:44:51.419839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.639 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:19.898 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:19.898 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.157 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:20.157 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:20.417 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:20.676 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b7c7e961-fcad-4eb7-9a99-6b5e18888a4c 00:34:20.676 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b7c7e961-fcad-4eb7-9a99-6b5e18888a4c lvol 20 00:34:20.676 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=58c34dd3-86cb-4896-86ac-3abcf8d03e92 00:34:20.676 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:20.934 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58c34dd3-86cb-4896-86ac-3abcf8d03e92 00:34:21.191 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:21.191 [2024-11-20 06:44:52.999824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:34:21.449 06:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:21.449 06:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=738707 00:34:21.449 06:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:21.449 06:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:22.821 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 58c34dd3-86cb-4896-86ac-3abcf8d03e92 MY_SNAPSHOT 00:34:22.821 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c36980d2-7af9-42a8-9b9f-e2dd46aaf012 00:34:22.821 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 58c34dd3-86cb-4896-86ac-3abcf8d03e92 30 00:34:23.079 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c36980d2-7af9-42a8-9b9f-e2dd46aaf012 MY_CLONE 00:34:23.336 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=793f6e22-f47a-4227-b834-722594904ffb 00:34:23.336 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 793f6e22-f47a-4227-b834-722594904ffb 00:34:23.594 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 738707 00:34:31.698 Initializing NVMe Controllers 00:34:31.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:31.698 Controller IO queue size 128, less than required. 00:34:31.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:31.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:31.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:31.698 Initialization complete. Launching workers. 
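
The rpc.py calls traced above build the whole stack under test and then mutate the logical volume while spdk_nvme_perf writes to it over NVMe/TCP; the perf summary follows directly below, and target/nvmf_lvol.sh@56-@58 tears the stack down afterwards. Condensed into one sequence; the commands and arguments are taken from the trace, the UUIDs are the ones this run returned, and capturing them in shell variables is this sketch's shorthand for the script's own substitutions:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                               # -> Malloc0
    $rpc bdev_malloc_create 64 512                               # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)               # b7c7e961-fcad-...
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)              # 58c34dd3-86cb-...
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4 KiB random writes at queue depth 128 from cores 3-4 (-c 0x18):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!; sleep 1
    # Mutate the lvol while I/O is in flight:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)          # c36980d2-7af9-...
    $rpc bdev_lvol_resize "$lvol" 30                             # grow 20 MiB -> 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)               # 793f6e22-f47a-...
    $rpc bdev_lvol_inflate "$clone"                              # decouple clone from snapshot
    wait "$perf_pid"

The assertion is largely implicit: snapshot, resize, clone, and inflate all complete under load, and perf still reports sane latency for both cores in the table below.
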
00:34:31.698 ========================================================
00:34:31.698                                                                           Latency(us)
00:34:31.698 Device Information                                                        :     IOPS     MiB/s   Average       min       max
00:34:31.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12528.10     48.94  10220.48   1885.58  52983.25
00:34:31.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12327.60     48.15  10385.05   3533.37  61990.07
00:34:31.698 ========================================================
00:34:31.698 Total                                                                     : 24855.69     97.09  10302.11   1885.58  61990.07
00:34:31.698
00:34:31.957 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:32.216 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 58c34dd3-86cb-4896-86ac-3abcf8d03e92 00:34:32.475 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7c7e961-fcad-4eb7-9a99-6b5e18888a4c 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.475 rmmod nvme_tcp 00:34:32.475 rmmod nvme_fabrics 00:34:32.475 rmmod nvme_keyring 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 738215 ']' 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 738215 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 738215 ']' 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 738215 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 738215 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 738215' 00:34:32.475 killing process with pid 738215 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 738215 00:34:32.475 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 738215 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.735 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:35.271 00:34:35.271 real 0m21.924s 00:34:35.271 user 0m55.322s 00:34:35.271 sys 0m9.353s 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:35.271 ************************************ 00:34:35.271 END TEST nvmf_lvol 00:34:35.271 ************************************ 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:35.271 ************************************ 00:34:35.271 START TEST nvmf_lvs_grow 00:34:35.271 
************************************ 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:35.271 * Looking for test storage... 00:34:35.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.271 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:35.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.272 --rc genhtml_branch_coverage=1 00:34:35.272 --rc genhtml_function_coverage=1 00:34:35.272 --rc genhtml_legend=1 00:34:35.272 --rc geninfo_all_blocks=1 00:34:35.272 --rc geninfo_unexecuted_blocks=1 00:34:35.272 00:34:35.272 ' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:35.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.272 --rc genhtml_branch_coverage=1 00:34:35.272 --rc genhtml_function_coverage=1 00:34:35.272 --rc genhtml_legend=1 00:34:35.272 --rc geninfo_all_blocks=1 00:34:35.272 --rc geninfo_unexecuted_blocks=1 00:34:35.272 00:34:35.272 ' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:35.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.272 --rc genhtml_branch_coverage=1 00:34:35.272 --rc genhtml_function_coverage=1 00:34:35.272 --rc genhtml_legend=1 00:34:35.272 --rc geninfo_all_blocks=1 00:34:35.272 --rc geninfo_unexecuted_blocks=1 00:34:35.272 00:34:35.272 ' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:35.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.272 --rc genhtml_branch_coverage=1 00:34:35.272 --rc genhtml_function_coverage=1 00:34:35.272 --rc genhtml_legend=1 00:34:35.272 --rc geninfo_all_blocks=1 00:34:35.272 --rc geninfo_unexecuted_blocks=1 00:34:35.272 00:34:35.272 ' 00:34:35.272 06:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
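
A few entries up, before nvmf/common.sh is sourced, the harness decided how to set LCOV_OPTS by testing lt 1.15 2 against the installed lcov. The cmp_versions trace there (scripts/common.sh@333-@368) shows the mechanics: split both version strings on '.', '-' and ':', then compare component by component. The sketch below is a reconstruction inferred from those traced steps, not a verbatim copy of scripts/common.sh (the real function also validates each component with its decimal() helper):

    # lt A B succeeds when version A < B; missing components compare as 0.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' || $op == '>=' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "pre-2.x lcov"   # true in this run, so the legacy --rc flags are exported

Here ver1=(1 15) and ver2=(2); the first component already decides 1 < 2, which is why the trace returns 0 and the --rc lcov_branch_coverage/lcov_function_coverage options get exported.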
00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.272 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:41.842 06:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:41.842 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:41.842 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:41.842 Found net devices under 0000:86:00.0: cvl_0_0 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:41.842 Found net devices under 0000:86:00.1: cvl_0_1 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.842 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:41.843 06:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:41.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:34:41.843 00:34:41.843 --- 10.0.0.2 ping statistics --- 00:34:41.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.843 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:34:41.843 00:34:41.843 --- 10.0.0.1 ping statistics --- 00:34:41.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.843 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=744564 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 744564 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 744564 ']' 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:41.843 [2024-11-20 06:45:12.758996] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
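The block above is the harness's nvmf_tcp_init: it moves the target-side e810 port (cvl_0_0) into a private network namespace, leaves the initiator port (cvl_0_1) in the root namespace, and verifies the 10.0.0.0/24 path in both directions before launching nvmf_tgt inside that namespace (which is why NVMF_TARGET_NS_CMD is prepended to NVMF_APP). A minimal standalone sketch of the same topology, using placeholder interface names eth_tgt/eth_ini rather than the cvl_* devices the script discovers:

  ns=tgt_ns                                        # namespace that will own the target port
  ip netns add "$ns"
  ip link set eth_tgt netns "$ns"                  # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev eth_ini              # initiator address stays in the root ns
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec "$ns" ip link set eth_tgt up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                               # root ns -> target ns
  ip netns exec "$ns" ping -c 1 10.0.0.1           # target ns -> root ns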
00:34:41.843 [2024-11-20 06:45:12.759879] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:34:41.843 [2024-11-20 06:45:12.759913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.843 [2024-11-20 06:45:12.836426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.843 [2024-11-20 06:45:12.875226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:41.843 [2024-11-20 06:45:12.875262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:41.843 [2024-11-20 06:45:12.875269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:41.843 [2024-11-20 06:45:12.875275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:41.843 [2024-11-20 06:45:12.875280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:41.843 [2024-11-20 06:45:12.875795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.843 [2024-11-20 06:45:12.943492] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:41.843 [2024-11-20 06:45:12.943719] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:41.843 06:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:41.843 [2024-11-20 06:45:13.176434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:41.843 ************************************ 00:34:41.843 START TEST lvs_grow_clean 00:34:41.843 ************************************ 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:41.843 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:42.103 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:42.103 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:42.103 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:42.103 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:42.103 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:42.103 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 lvol 150 00:34:42.363 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3c348f4f-6cc9-409c-918f-02dde334f9b7 00:34:42.363 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:42.363 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:42.622 [2024-11-20 06:45:14.228149] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:42.622 [2024-11-20 06:45:14.228287] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:42.622 true 00:34:42.622 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:42.622 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:42.622 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:42.622 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:42.882 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3c348f4f-6cc9-409c-918f-02dde334f9b7 00:34:43.141 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:43.141 [2024-11-20 06:45:14.968672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.400 06:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=744947 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 744947 /var/tmp/bdevperf.sock 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 744947 ']' 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:43.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:43.400 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:43.400 [2024-11-20 06:45:15.226099] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:34:43.401 [2024-11-20 06:45:15.226149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744947 ] 00:34:43.659 [2024-11-20 06:45:15.300890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.659 [2024-11-20 06:45:15.342544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.659 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:43.659 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:34:43.659 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:44.228 Nvme0n1 00:34:44.228 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:44.228 [ 00:34:44.228 { 00:34:44.228 "name": "Nvme0n1", 00:34:44.228 "aliases": [ 00:34:44.228 "3c348f4f-6cc9-409c-918f-02dde334f9b7" 00:34:44.228 ], 00:34:44.228 "product_name": "NVMe disk", 00:34:44.228 "block_size": 4096, 00:34:44.228 "num_blocks": 38912, 00:34:44.228 "uuid": "3c348f4f-6cc9-409c-918f-02dde334f9b7", 00:34:44.228 "numa_id": 1, 00:34:44.228 "assigned_rate_limits": { 00:34:44.228 "rw_ios_per_sec": 0, 00:34:44.228 "rw_mbytes_per_sec": 0, 00:34:44.228 "r_mbytes_per_sec": 0, 00:34:44.228 "w_mbytes_per_sec": 0 00:34:44.228 }, 00:34:44.228 "claimed": false, 00:34:44.228 "zoned": false, 00:34:44.228 "supported_io_types": { 00:34:44.228 "read": true, 00:34:44.228 "write": true, 00:34:44.228 "unmap": true, 00:34:44.228 "flush": true, 00:34:44.228 "reset": true, 00:34:44.228 "nvme_admin": true, 00:34:44.228 "nvme_io": true, 00:34:44.228 "nvme_io_md": false, 00:34:44.228 "write_zeroes": true, 00:34:44.228 "zcopy": false, 00:34:44.228 "get_zone_info": false, 00:34:44.228 "zone_management": false, 00:34:44.228 "zone_append": false, 00:34:44.228 "compare": true, 00:34:44.228 "compare_and_write": true, 00:34:44.228 "abort": true, 00:34:44.228 "seek_hole": false, 00:34:44.228 "seek_data": false, 00:34:44.228 "copy": true, 
00:34:44.228 "nvme_iov_md": false 00:34:44.228 }, 00:34:44.228 "memory_domains": [ 00:34:44.228 { 00:34:44.228 "dma_device_id": "system", 00:34:44.228 "dma_device_type": 1 00:34:44.228 } 00:34:44.228 ], 00:34:44.228 "driver_specific": { 00:34:44.228 "nvme": [ 00:34:44.228 { 00:34:44.228 "trid": { 00:34:44.228 "trtype": "TCP", 00:34:44.228 "adrfam": "IPv4", 00:34:44.228 "traddr": "10.0.0.2", 00:34:44.228 "trsvcid": "4420", 00:34:44.228 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:44.228 }, 00:34:44.228 "ctrlr_data": { 00:34:44.228 "cntlid": 1, 00:34:44.228 "vendor_id": "0x8086", 00:34:44.228 "model_number": "SPDK bdev Controller", 00:34:44.228 "serial_number": "SPDK0", 00:34:44.228 "firmware_revision": "25.01", 00:34:44.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:44.228 "oacs": { 00:34:44.228 "security": 0, 00:34:44.228 "format": 0, 00:34:44.228 "firmware": 0, 00:34:44.228 "ns_manage": 0 00:34:44.228 }, 00:34:44.228 "multi_ctrlr": true, 00:34:44.228 "ana_reporting": false 00:34:44.228 }, 00:34:44.228 "vs": { 00:34:44.228 "nvme_version": "1.3" 00:34:44.228 }, 00:34:44.228 "ns_data": { 00:34:44.228 "id": 1, 00:34:44.228 "can_share": true 00:34:44.228 } 00:34:44.228 } 00:34:44.228 ], 00:34:44.228 "mp_policy": "active_passive" 00:34:44.228 } 00:34:44.228 } 00:34:44.228 ] 00:34:44.228 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=745073 00:34:44.228 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:44.228 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:44.487 Running I/O for 10 seconds... 
00:34:45.422 Latency(us)
00:34:45.422 [2024-11-20T05:45:17.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:45.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:45.422 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00
00:34:45.422 [2024-11-20T05:45:17.258Z] ===================================================================================================================
00:34:45.422 [2024-11-20T05:45:17.258Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00
00:34:45.422
00:34:46.357 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bd5c8e63-bd8a-46fd-b8af-d3e061393247
00:34:46.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:46.357 Nvme0n1 : 2.00 23050.50 90.04 0.00 0.00 0.00 0.00 0.00
00:34:46.357 [2024-11-20T05:45:18.193Z] ===================================================================================================================
00:34:46.357 [2024-11-20T05:45:18.193Z] Total : 23050.50 90.04 0.00 0.00 0.00 0.00 0.00
00:34:46.616 true
00:34:46.616 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247
00:34:46.616 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:34:46.616 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:34:46.616 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:34:46.616 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 745073
00:34:47.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:47.575 Nvme0n1 : 3.00 23156.33 90.45 0.00 0.00 0.00 0.00 0.00
00:34:47.575 [2024-11-20T05:45:19.411Z] ===================================================================================================================
00:34:47.575 [2024-11-20T05:45:19.411Z] Total : 23156.33 90.45 0.00 0.00 0.00 0.00 0.00
00:34:48.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:48.511 Nvme0n1 : 4.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00
00:34:48.511 [2024-11-20T05:45:20.347Z] ===================================================================================================================
00:34:48.511 [2024-11-20T05:45:20.347Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00
00:34:49.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:49.446 Nvme0n1 : 5.00 23291.80 90.98 0.00 0.00 0.00 0.00 0.00
00:34:49.446 [2024-11-20T05:45:21.282Z] ===================================================================================================================
00:34:49.446 [2024-11-20T05:45:21.282Z] Total : 23291.80 90.98 0.00 0.00 0.00 0.00 0.00
00:34:50.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:50.383 Nvme0n1 : 6.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00
00:34:50.383 [2024-11-20T05:45:22.219Z] ===================================================================================================================
00:34:50.383 [2024-11-20T05:45:22.219Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00
00:34:51.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:51.320 Nvme0n1 : 7.00 23404.29 91.42 0.00 0.00 0.00 0.00 0.00
00:34:51.320 [2024-11-20T05:45:23.156Z] ===================================================================================================================
00:34:51.320 [2024-11-20T05:45:23.156Z] Total : 23404.29 91.42 0.00 0.00 0.00 0.00 0.00
00:34:52.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:52.696 Nvme0n1 : 8.00 23431.50 91.53 0.00 0.00 0.00 0.00 0.00
00:34:52.696 [2024-11-20T05:45:24.532Z] ===================================================================================================================
00:34:52.696 [2024-11-20T05:45:24.532Z] Total : 23431.50 91.53 0.00 0.00 0.00 0.00 0.00
00:34:53.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:53.632 Nvme0n1 : 9.00 23466.78 91.67 0.00 0.00 0.00 0.00 0.00
00:34:53.632 [2024-11-20T05:45:25.468Z] ===================================================================================================================
00:34:53.632 [2024-11-20T05:45:25.468Z] Total : 23466.78 91.67 0.00 0.00 0.00 0.00 0.00
00:34:54.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:54.567 Nvme0n1 : 10.00 23482.30 91.73 0.00 0.00 0.00 0.00 0.00
00:34:54.567 [2024-11-20T05:45:26.403Z] ===================================================================================================================
00:34:54.567 [2024-11-20T05:45:26.403Z] Total : 23482.30 91.73 0.00 0.00 0.00 0.00 0.00
00:34:54.567
00:34:54.567
00:34:54.567 Latency(us)
[2024-11-20T05:45:26.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:54.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:54.567 Nvme0n1 : 10.00 23484.33 91.74 0.00 0.00 5447.49 5055.63 28086.86
00:34:54.567 [2024-11-20T05:45:26.403Z] ===================================================================================================================
00:34:54.568 [2024-11-20T05:45:26.404Z] Total : 23484.33 91.74 0.00 0.00 5447.49 5055.63 28086.86
00:34:54.568 {
00:34:54.568   "results": [
00:34:54.568     {
00:34:54.568       "job": "Nvme0n1",
00:34:54.568       "core_mask": "0x2",
00:34:54.568       "workload": "randwrite",
00:34:54.568       "status": "finished",
00:34:54.568       "queue_depth": 128,
00:34:54.568       "io_size": 4096,
00:34:54.568       "runtime": 10.004584,
00:34:54.568       "iops": 23484.33478093642,
00:34:54.568       "mibps": 91.73568273803289,
00:34:54.568       "io_failed": 0,
00:34:54.568       "io_timeout": 0,
00:34:54.568       "avg_latency_us": 5447.490465493211,
00:34:54.568       "min_latency_us": 5055.634285714285,
00:34:54.568       "max_latency_us": 28086.85714285714
00:34:54.568     }
00:34:54.568   ],
00:34:54.568   "core_count": 1
00:34:54.568 }
00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 744947
00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 744947 ']'
00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 744947
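The assertion in this pass is the cluster count: two seconds into the run bdev_lvol_grow_lvstore is issued, and total_data_clusters moves from 49 to 99 while the queue-depth-128 writer keeps going (the Fail/s and TO/s columns stay at 0.00 across the grow). The arithmetic, assuming the one-cluster metadata overhead implied by the reported counts:

  200 MiB / 4 MiB cluster-sz = 50 clusters, minus 1 metadata cluster -> 49 data clusters
  400 MiB / 4 MiB cluster-sz = 100 clusters, minus 1 metadata cluster -> 99 data clusters

and the check itself, in the shorthand used earlier:

  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99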
00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 744947 00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 744947' 00:34:54.568 killing process with pid 744947 00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 744947 00:34:54.568 Received shutdown signal, test time was about 10.000000 seconds 00:34:54.568 00:34:54.568 Latency(us) 00:34:54.568 [2024-11-20T05:45:26.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.568 [2024-11-20T05:45:26.404Z] =================================================================================================================== 00:34:54.568 [2024-11-20T05:45:26.404Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 744947 00:34:54.568 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:54.826 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:55.085 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:55.085 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:55.344 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:55.344 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:55.344 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:55.344 [2024-11-20 06:45:27.132260] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:55.604 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 
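Teardown of the clean pass makes two checks. First, free_clusters must be 61, which follows from the lvol's size: 150 MiB / 4 MiB = 37.5, rounded up to 38 allocated clusters (matching the num_allocated_clusters of 38 in the lvol dump further down, and the bdev's 38912 blocks, i.e. 38 x 1024 blocks of 4 KiB), so 99 - 38 = 61. Second, once bdev_aio_delete hot-removes the base bdev, looking up the lvstore has to fail; the harness's NOT wrapper (whose valid_exec_arg trace follows) passes only when the wrapped command fails, here with the -19 "No such device" JSON-RPC error shown below:

  NOT $rpc bdev_lvol_get_lvstores -u "$lvs"   # succeeds only if the lookup fails

The aio bdev is then re-created and waitforbdev confirms the lvol comes back from the on-disk metadata.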
00:34:55.604 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:34:55.604 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:55.604 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:55.604 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:55.604 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:55.604 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:55.605 request: 00:34:55.605 { 00:34:55.605 "uuid": "bd5c8e63-bd8a-46fd-b8af-d3e061393247", 00:34:55.605 "method": "bdev_lvol_get_lvstores", 00:34:55.605 "req_id": 1 00:34:55.605 } 00:34:55.605 Got JSON-RPC error response 00:34:55.605 response: 00:34:55.605 { 00:34:55.605 "code": -19, 00:34:55.605 "message": "No such device" 00:34:55.605 } 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:55.605 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:55.864 aio_bdev 00:34:55.864 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3c348f4f-6cc9-409c-918f-02dde334f9b7 00:34:55.864 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=3c348f4f-6cc9-409c-918f-02dde334f9b7 00:34:55.864 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:34:55.864 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:34:55.864 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:34:55.864 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:34:55.864 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:56.122 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3c348f4f-6cc9-409c-918f-02dde334f9b7 -t 2000 00:34:56.122 [ 00:34:56.122 { 00:34:56.122 "name": "3c348f4f-6cc9-409c-918f-02dde334f9b7", 00:34:56.122 "aliases": [ 00:34:56.122 "lvs/lvol" 00:34:56.122 ], 00:34:56.122 "product_name": "Logical Volume", 00:34:56.122 "block_size": 4096, 00:34:56.122 "num_blocks": 38912, 00:34:56.122 "uuid": "3c348f4f-6cc9-409c-918f-02dde334f9b7", 00:34:56.122 "assigned_rate_limits": { 00:34:56.122 "rw_ios_per_sec": 0, 00:34:56.122 "rw_mbytes_per_sec": 0, 00:34:56.122 "r_mbytes_per_sec": 0, 00:34:56.122 "w_mbytes_per_sec": 0 00:34:56.122 }, 00:34:56.122 "claimed": false, 00:34:56.122 "zoned": false, 00:34:56.122 "supported_io_types": { 00:34:56.122 "read": true, 00:34:56.122 "write": true, 00:34:56.122 "unmap": true, 00:34:56.122 "flush": false, 00:34:56.122 "reset": true, 00:34:56.122 "nvme_admin": false, 00:34:56.122 "nvme_io": false, 00:34:56.122 "nvme_io_md": false, 00:34:56.122 "write_zeroes": true, 00:34:56.122 "zcopy": false, 00:34:56.122 "get_zone_info": false, 00:34:56.122 "zone_management": false, 00:34:56.122 "zone_append": false, 00:34:56.122 "compare": false, 00:34:56.122 "compare_and_write": false, 00:34:56.122 "abort": false, 00:34:56.122 "seek_hole": true, 00:34:56.122 "seek_data": true, 00:34:56.122 "copy": false, 00:34:56.122 "nvme_iov_md": false 00:34:56.122 }, 00:34:56.122 "driver_specific": { 00:34:56.122 "lvol": { 00:34:56.122 "lvol_store_uuid": "bd5c8e63-bd8a-46fd-b8af-d3e061393247", 00:34:56.122 "base_bdev": "aio_bdev", 00:34:56.122 "thin_provision": false, 00:34:56.122 "num_allocated_clusters": 38, 00:34:56.122 "snapshot": false, 00:34:56.122 "clone": false, 00:34:56.122 "esnap_clone": false 00:34:56.122 } 00:34:56.122 } 00:34:56.122 } 00:34:56.122 ] 00:34:56.122 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:34:56.122 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:56.122 06:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:56.381 06:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:56.381 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:56.381 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:56.640 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:56.640 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3c348f4f-6cc9-409c-918f-02dde334f9b7 00:34:56.898 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bd5c8e63-bd8a-46fd-b8af-d3e061393247 00:34:56.898 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:57.157 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:57.157 00:34:57.157 real 0m15.703s 00:34:57.157 user 0m15.236s 00:34:57.157 sys 0m1.433s 00:34:57.157 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:57.157 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:57.157 ************************************ 00:34:57.157 END TEST lvs_grow_clean 00:34:57.157 ************************************ 00:34:57.157 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:57.157 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:57.157 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:57.157 06:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:57.415 ************************************ 00:34:57.415 START TEST lvs_grow_dirty 00:34:57.415 ************************************ 00:34:57.415 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:34:57.415 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:57.415 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:57.415 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:57.415 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:57.415 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:57.416 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:57.416 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:57.416 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:57.416 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:57.416 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:57.416 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:57.674 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9a0d66d6-f747-4178-8557-12311b75962a 00:34:57.674 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:34:57.674 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:57.933 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:57.933 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:57.933 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9a0d66d6-f747-4178-8557-12311b75962a lvol 150 00:34:58.192 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=13f96bf8-234f-4a73-884e-5793724101fe 00:34:58.192 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:58.192 06:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:58.192 [2024-11-20 06:45:30.000173] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:58.192 [2024-11-20 06:45:30.000326] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:58.192 true 00:34:58.450 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:34:58.450 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:58.451 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:58.451 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:58.709 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13f96bf8-234f-4a73-884e-5793724101fe 00:34:58.968 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.968 [2024-11-20 06:45:30.776769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=747495 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 747495 /var/tmp/bdevperf.sock 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 747495 ']' 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:59.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:59.227 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:59.227 [2024-11-20 06:45:31.038682] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:34:59.227 [2024-11-20 06:45:31.038732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747495 ] 00:34:59.486 [2024-11-20 06:45:31.114399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.486 [2024-11-20 06:45:31.155829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.486 06:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:59.486 06:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:34:59.486 06:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:00.052 Nvme0n1 00:35:00.052 06:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:00.052 [ 00:35:00.052 { 00:35:00.052 "name": "Nvme0n1", 00:35:00.052 "aliases": [ 00:35:00.052 "13f96bf8-234f-4a73-884e-5793724101fe" 00:35:00.052 ], 00:35:00.052 "product_name": "NVMe disk", 00:35:00.052 "block_size": 4096, 00:35:00.052 "num_blocks": 38912, 00:35:00.052 "uuid": "13f96bf8-234f-4a73-884e-5793724101fe", 00:35:00.052 "numa_id": 1, 00:35:00.052 "assigned_rate_limits": { 00:35:00.052 "rw_ios_per_sec": 0, 00:35:00.052 "rw_mbytes_per_sec": 0, 00:35:00.052 "r_mbytes_per_sec": 0, 00:35:00.052 "w_mbytes_per_sec": 0 00:35:00.052 }, 00:35:00.052 "claimed": false, 00:35:00.052 "zoned": false, 00:35:00.052 "supported_io_types": { 00:35:00.052 "read": true, 00:35:00.052 "write": true, 00:35:00.052 "unmap": true, 00:35:00.052 "flush": true, 00:35:00.052 "reset": true, 00:35:00.052 "nvme_admin": true, 00:35:00.052 "nvme_io": true, 00:35:00.052 "nvme_io_md": false, 00:35:00.052 "write_zeroes": true, 00:35:00.052 "zcopy": false, 00:35:00.052 "get_zone_info": false, 00:35:00.053 "zone_management": false, 00:35:00.053 "zone_append": false, 00:35:00.053 "compare": true, 00:35:00.053 "compare_and_write": true, 00:35:00.053 "abort": true, 00:35:00.053 "seek_hole": false, 00:35:00.053 "seek_data": false, 00:35:00.053 "copy": true, 00:35:00.053 "nvme_iov_md": false 00:35:00.053 }, 00:35:00.053 "memory_domains": [ 00:35:00.053 { 00:35:00.053 "dma_device_id": "system", 00:35:00.053 "dma_device_type": 1 00:35:00.053 } 00:35:00.053 ], 00:35:00.053 "driver_specific": { 00:35:00.053 "nvme": [ 00:35:00.053 { 00:35:00.053 "trid": { 00:35:00.053 "trtype": "TCP", 00:35:00.053 "adrfam": "IPv4", 00:35:00.053 "traddr": "10.0.0.2", 00:35:00.053 "trsvcid": "4420", 00:35:00.053 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:00.053 }, 00:35:00.053 "ctrlr_data": { 
00:35:00.053 "cntlid": 1, 00:35:00.053 "vendor_id": "0x8086", 00:35:00.053 "model_number": "SPDK bdev Controller", 00:35:00.053 "serial_number": "SPDK0", 00:35:00.053 "firmware_revision": "25.01", 00:35:00.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:00.053 "oacs": { 00:35:00.053 "security": 0, 00:35:00.053 "format": 0, 00:35:00.053 "firmware": 0, 00:35:00.053 "ns_manage": 0 00:35:00.053 }, 00:35:00.053 "multi_ctrlr": true, 00:35:00.053 "ana_reporting": false 00:35:00.053 }, 00:35:00.053 "vs": { 00:35:00.053 "nvme_version": "1.3" 00:35:00.053 }, 00:35:00.053 "ns_data": { 00:35:00.053 "id": 1, 00:35:00.053 "can_share": true 00:35:00.053 } 00:35:00.053 } 00:35:00.053 ], 00:35:00.053 "mp_policy": "active_passive" 00:35:00.053 } 00:35:00.053 } 00:35:00.053 ] 00:35:00.053 06:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=747650 00:35:00.053 06:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:00.053 06:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:00.311 Running I/O for 10 seconds... 00:35:01.310 Latency(us) 00:35:01.310 [2024-11-20T05:45:33.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:01.311 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:35:01.311 [2024-11-20T05:45:33.147Z] =================================================================================================================== 00:35:01.311 [2024-11-20T05:45:33.147Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:35:01.311 00:35:02.298 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:02.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:02.298 Nvme0n1 : 2.00 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:35:02.298 [2024-11-20T05:45:34.134Z] =================================================================================================================== 00:35:02.298 [2024-11-20T05:45:34.134Z] Total : 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:35:02.298 00:35:02.298 true 00:35:02.298 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:02.298 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:02.556 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:02.556 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:02.556 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 747650 00:35:03.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:03.492 Nvme0n1 : 3.00 
23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:35:03.492 [2024-11-20T05:45:35.328Z] =================================================================================================================== 00:35:03.492 [2024-11-20T05:45:35.328Z] Total : 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:35:03.492 00:35:04.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:04.428 Nvme0n1 : 4.00 23415.75 91.47 0.00 0.00 0.00 0.00 0.00 00:35:04.428 [2024-11-20T05:45:36.264Z] =================================================================================================================== 00:35:04.428 [2024-11-20T05:45:36.264Z] Total : 23415.75 91.47 0.00 0.00 0.00 0.00 0.00 00:35:04.428 00:35:05.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:05.364 Nvme0n1 : 5.00 23479.40 91.72 0.00 0.00 0.00 0.00 0.00 00:35:05.364 [2024-11-20T05:45:37.200Z] =================================================================================================================== 00:35:05.364 [2024-11-20T05:45:37.200Z] Total : 23479.40 91.72 0.00 0.00 0.00 0.00 0.00 00:35:05.364 00:35:06.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:06.300 Nvme0n1 : 6.00 23545.50 91.97 0.00 0.00 0.00 0.00 0.00 00:35:06.300 [2024-11-20T05:45:38.136Z] =================================================================================================================== 00:35:06.300 [2024-11-20T05:45:38.136Z] Total : 23545.50 91.97 0.00 0.00 0.00 0.00 0.00 00:35:06.300 00:35:07.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:07.234 Nvme0n1 : 7.00 23556.43 92.02 0.00 0.00 0.00 0.00 0.00 00:35:07.234 [2024-11-20T05:45:39.070Z] =================================================================================================================== 00:35:07.234 [2024-11-20T05:45:39.070Z] Total : 23556.43 92.02 0.00 0.00 0.00 0.00 0.00 00:35:07.234 00:35:08.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:08.176 Nvme0n1 : 8.00 23582.62 92.12 0.00 0.00 0.00 0.00 0.00 00:35:08.176 [2024-11-20T05:45:40.012Z] =================================================================================================================== 00:35:08.176 [2024-11-20T05:45:40.012Z] Total : 23582.62 92.12 0.00 0.00 0.00 0.00 0.00 00:35:08.176 00:35:09.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:09.552 Nvme0n1 : 9.00 23558.78 92.03 0.00 0.00 0.00 0.00 0.00 00:35:09.552 [2024-11-20T05:45:41.388Z] =================================================================================================================== 00:35:09.552 [2024-11-20T05:45:41.388Z] Total : 23558.78 92.03 0.00 0.00 0.00 0.00 0.00 00:35:09.553 00:35:10.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:10.503 Nvme0n1 : 10.00 23590.50 92.15 0.00 0.00 0.00 0.00 0.00 00:35:10.503 [2024-11-20T05:45:42.339Z] =================================================================================================================== 00:35:10.503 [2024-11-20T05:45:42.339Z] Total : 23590.50 92.15 0.00 0.00 0.00 0.00 0.00 00:35:10.503 00:35:10.503 00:35:10.503 Latency(us) 00:35:10.503 [2024-11-20T05:45:42.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:10.503 Nvme0n1 : 10.01 23588.68 92.14 0.00 0.00 5423.52 3136.37 26214.40 00:35:10.503 
[2024-11-20T05:45:42.339Z] =================================================================================================================== 00:35:10.503 [2024-11-20T05:45:42.339Z] Total : 23588.68 92.14 0.00 0.00 5423.52 3136.37 26214.40 00:35:10.503 { 00:35:10.503 "results": [ 00:35:10.503 { 00:35:10.503 "job": "Nvme0n1", 00:35:10.503 "core_mask": "0x2", 00:35:10.503 "workload": "randwrite", 00:35:10.503 "status": "finished", 00:35:10.503 "queue_depth": 128, 00:35:10.503 "io_size": 4096, 00:35:10.503 "runtime": 10.0062, 00:35:10.503 "iops": 23588.675021486677, 00:35:10.503 "mibps": 92.14326180268233, 00:35:10.503 "io_failed": 0, 00:35:10.503 "io_timeout": 0, 00:35:10.503 "avg_latency_us": 5423.516970463977, 00:35:10.503 "min_latency_us": 3136.365714285714, 00:35:10.503 "max_latency_us": 26214.4 00:35:10.503 } 00:35:10.503 ], 00:35:10.503 "core_count": 1 00:35:10.503 } 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 747495 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 747495 ']' 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 747495 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 747495 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 747495' 00:35:10.503 killing process with pid 747495 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 747495 00:35:10.503 Received shutdown signal, test time was about 10.000000 seconds 00:35:10.503 00:35:10.503 Latency(us) 00:35:10.503 [2024-11-20T05:45:42.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.503 [2024-11-20T05:45:42.339Z] =================================================================================================================== 00:35:10.503 [2024-11-20T05:45:42.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 747495 00:35:10.503 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:10.762 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
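The ten-second run above follows SPDK's remote-control bdevperf pattern: the tool is started with -z so it idles until an RPC tells it to run, and the perform_tests call visible in the trace is that trigger. A minimal sketch of the full sequence, assuming bdevperf was launched with the socket and workload parameters shown in the log; the attach-controller step happens earlier in the script and is not part of this excerpt, so its exact arguments here are an assumption:

ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# assumed launch: idle until 'perform_tests' arrives on the RPC socket
$ROOT/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randwrite -t 10 &
# assumed attach of the exported namespace (address and subnqn taken from the trace)
$ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# the step visible in the log: start the configured workload
$ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests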
00:35:11.020 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:11.020 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:11.020 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:11.020 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:11.020 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 744564 00:35:11.020 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 744564 00:35:11.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 744564 Killed "${NVMF_APP[@]}" "$@" 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=749491 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 749491 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 749491 ']' 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
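The kill -9 above is the point of the lvs_grow_dirty variant: SIGKILL prevents a clean blobstore unload, so the lvstore metadata on aio_bdev stays dirty and the restarted target must replay it. Condensed from the trace, with the pid and flags as logged; the trailing 'true' is what swallows the non-zero status the shell reports as 'Killed':

kill -9 744564                      # nvmf_tgt dies without unloading the lvstore
wait 744564 || true                 # reap; the 'Killed' status is expected here
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &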
00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:11.279 06:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:11.279 [2024-11-20 06:45:42.916153] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:11.279 [2024-11-20 06:45:42.917044] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:35:11.279 [2024-11-20 06:45:42.917077] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.279 [2024-11-20 06:45:42.996215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.279 [2024-11-20 06:45:43.036102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.279 [2024-11-20 06:45:43.036136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.279 [2024-11-20 06:45:43.036143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.279 [2024-11-20 06:45:43.036149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.279 [2024-11-20 06:45:43.036154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.279 [2024-11-20 06:45:43.036701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.279 [2024-11-20 06:45:43.102100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:11.279 [2024-11-20 06:45:43.102326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
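With the target back up, the recovery itself is triggered by the commands in the trace lines that follow: re-creating the aio bdev over the same backing file makes examine-on-create load the lvstore and replay its dirty metadata, which is what the 'Performing recovery on blobstore' / 'Recover: blob 0x0' notices below report. The RPC sequence, condensed from the trace:

ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$ROOT/scripts/rpc.py bdev_aio_create \
    $ROOT/test/nvmf/target/aio_bdev aio_bdev 4096   # examine triggers bs_recover
$ROOT/scripts/rpc.py bdev_wait_for_examine
# the recovered lvol must reappear under its old UUID within 2000 ms
$ROOT/scripts/rpc.py bdev_get_bdevs -b 13f96bf8-234f-4a73-884e-5793724101fe -t 2000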
00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:11.538 [2024-11-20 06:45:43.342072] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:11.538 [2024-11-20 06:45:43.342288] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:11.538 [2024-11-20 06:45:43.342375] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:11.538 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 13f96bf8-234f-4a73-884e-5793724101fe 00:35:11.796 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=13f96bf8-234f-4a73-884e-5793724101fe 00:35:11.796 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:35:11.796 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:35:11.796 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:35:11.796 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:35:11.796 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:11.796 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 13f96bf8-234f-4a73-884e-5793724101fe -t 2000 00:35:12.055 [ 00:35:12.055 { 00:35:12.055 "name": "13f96bf8-234f-4a73-884e-5793724101fe", 00:35:12.055 "aliases": [ 00:35:12.055 "lvs/lvol" 00:35:12.055 ], 00:35:12.055 "product_name": "Logical Volume", 00:35:12.055 "block_size": 4096, 00:35:12.055 "num_blocks": 38912, 00:35:12.055 "uuid": "13f96bf8-234f-4a73-884e-5793724101fe", 00:35:12.055 "assigned_rate_limits": { 00:35:12.055 "rw_ios_per_sec": 0, 00:35:12.055 "rw_mbytes_per_sec": 0, 00:35:12.055 
"r_mbytes_per_sec": 0, 00:35:12.055 "w_mbytes_per_sec": 0 00:35:12.055 }, 00:35:12.055 "claimed": false, 00:35:12.055 "zoned": false, 00:35:12.055 "supported_io_types": { 00:35:12.055 "read": true, 00:35:12.055 "write": true, 00:35:12.055 "unmap": true, 00:35:12.055 "flush": false, 00:35:12.055 "reset": true, 00:35:12.055 "nvme_admin": false, 00:35:12.055 "nvme_io": false, 00:35:12.055 "nvme_io_md": false, 00:35:12.055 "write_zeroes": true, 00:35:12.055 "zcopy": false, 00:35:12.055 "get_zone_info": false, 00:35:12.055 "zone_management": false, 00:35:12.055 "zone_append": false, 00:35:12.055 "compare": false, 00:35:12.055 "compare_and_write": false, 00:35:12.055 "abort": false, 00:35:12.055 "seek_hole": true, 00:35:12.055 "seek_data": true, 00:35:12.055 "copy": false, 00:35:12.055 "nvme_iov_md": false 00:35:12.055 }, 00:35:12.055 "driver_specific": { 00:35:12.055 "lvol": { 00:35:12.055 "lvol_store_uuid": "9a0d66d6-f747-4178-8557-12311b75962a", 00:35:12.055 "base_bdev": "aio_bdev", 00:35:12.055 "thin_provision": false, 00:35:12.055 "num_allocated_clusters": 38, 00:35:12.055 "snapshot": false, 00:35:12.055 "clone": false, 00:35:12.055 "esnap_clone": false 00:35:12.055 } 00:35:12.055 } 00:35:12.055 } 00:35:12.055 ] 00:35:12.055 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:35:12.055 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:12.055 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:12.314 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:12.314 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:12.314 06:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:12.572 [2024-11-20 06:45:44.317161] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:12.572 06:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:12.572 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:12.831 request: 00:35:12.831 { 00:35:12.831 "uuid": "9a0d66d6-f747-4178-8557-12311b75962a", 00:35:12.831 "method": "bdev_lvol_get_lvstores", 00:35:12.831 "req_id": 1 00:35:12.831 } 00:35:12.831 Got JSON-RPC error response 00:35:12.831 response: 00:35:12.831 { 00:35:12.831 "code": -19, 00:35:12.831 "message": "No such device" 00:35:12.831 } 00:35:12.831 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:35:12.831 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:12.831 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:12.831 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:12.831 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:13.089 aio_bdev 00:35:13.089 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 13f96bf8-234f-4a73-884e-5793724101fe 00:35:13.089 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=13f96bf8-234f-4a73-884e-5793724101fe 00:35:13.089 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:35:13.089 06:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:35:13.089 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:35:13.089 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:35:13.089 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:13.348 06:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 13f96bf8-234f-4a73-884e-5793724101fe -t 2000 00:35:13.348 [ 00:35:13.348 { 00:35:13.348 "name": "13f96bf8-234f-4a73-884e-5793724101fe", 00:35:13.348 "aliases": [ 00:35:13.348 "lvs/lvol" 00:35:13.348 ], 00:35:13.348 "product_name": "Logical Volume", 00:35:13.348 "block_size": 4096, 00:35:13.348 "num_blocks": 38912, 00:35:13.348 "uuid": "13f96bf8-234f-4a73-884e-5793724101fe", 00:35:13.348 "assigned_rate_limits": { 00:35:13.348 "rw_ios_per_sec": 0, 00:35:13.348 "rw_mbytes_per_sec": 0, 00:35:13.348 "r_mbytes_per_sec": 0, 00:35:13.348 "w_mbytes_per_sec": 0 00:35:13.348 }, 00:35:13.348 "claimed": false, 00:35:13.348 "zoned": false, 00:35:13.348 "supported_io_types": { 00:35:13.348 "read": true, 00:35:13.348 "write": true, 00:35:13.348 "unmap": true, 00:35:13.348 "flush": false, 00:35:13.348 "reset": true, 00:35:13.348 "nvme_admin": false, 00:35:13.348 "nvme_io": false, 00:35:13.348 "nvme_io_md": false, 00:35:13.348 "write_zeroes": true, 00:35:13.348 "zcopy": false, 00:35:13.348 "get_zone_info": false, 00:35:13.348 "zone_management": false, 00:35:13.348 "zone_append": false, 00:35:13.348 "compare": false, 00:35:13.348 "compare_and_write": false, 00:35:13.348 "abort": false, 00:35:13.348 "seek_hole": true, 00:35:13.348 "seek_data": true, 00:35:13.348 "copy": false, 00:35:13.348 "nvme_iov_md": false 00:35:13.348 }, 00:35:13.348 "driver_specific": { 00:35:13.348 "lvol": { 00:35:13.348 "lvol_store_uuid": "9a0d66d6-f747-4178-8557-12311b75962a", 00:35:13.348 "base_bdev": "aio_bdev", 00:35:13.348 "thin_provision": false, 00:35:13.348 "num_allocated_clusters": 38, 00:35:13.348 "snapshot": false, 00:35:13.348 "clone": false, 00:35:13.348 "esnap_clone": false 00:35:13.348 } 00:35:13.348 } 00:35:13.348 } 00:35:13.348 ] 00:35:13.348 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:35:13.348 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:13.348 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:13.607 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:13.607 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:13.607 06:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:13.865 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:13.865 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13f96bf8-234f-4a73-884e-5793724101fe 00:35:13.865 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a0d66d6-f747-4178-8557-12311b75962a 00:35:14.124 06:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:14.383 00:35:14.383 real 0m17.074s 00:35:14.383 user 0m34.477s 00:35:14.383 sys 0m3.810s 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:14.383 ************************************ 00:35:14.383 END TEST lvs_grow_dirty 00:35:14.383 ************************************ 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:14.383 nvmf_trace.0 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
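Teardown above walks back down the stack in strict order: the lvol is deleted before its lvstore, the lvstore before the aio bdev that backs it, and the backing file is removed last. As logged:

ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$ROOT/scripts/rpc.py bdev_lvol_delete 13f96bf8-234f-4a73-884e-5793724101fe
$ROOT/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a0d66d6-f747-4178-8557-12311b75962a
$ROOT/scripts/rpc.py bdev_aio_delete aio_bdev
rm -f $ROOT/test/nvmf/target/aio_bdev              # backing file removed last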
00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:14.383 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:14.383 rmmod nvme_tcp 00:35:14.383 rmmod nvme_fabrics 00:35:14.383 rmmod nvme_keyring 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 749491 ']' 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 749491 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 749491 ']' 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 749491 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 749491 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 749491' 00:35:14.642 killing process with pid 749491 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 749491 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 749491 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:14.642 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:35:14.900 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:14.900 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:14.900 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.900 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:14.900 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.803 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:16.803 00:35:16.803 real 0m41.949s 00:35:16.803 user 0m52.305s 00:35:16.803 sys 0m10.025s 00:35:16.803 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:16.803 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:16.803 ************************************ 00:35:16.803 END TEST nvmf_lvs_grow 00:35:16.803 ************************************ 00:35:16.803 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:16.803 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:16.803 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:16.803 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:16.803 ************************************ 00:35:16.803 START TEST nvmf_bdev_io_wait 00:35:16.803 ************************************ 00:35:16.803 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:17.062 * Looking for test storage... 
00:35:17.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:17.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.062 --rc genhtml_branch_coverage=1 00:35:17.062 --rc genhtml_function_coverage=1 00:35:17.062 --rc genhtml_legend=1 00:35:17.062 --rc geninfo_all_blocks=1 00:35:17.062 --rc geninfo_unexecuted_blocks=1 00:35:17.062 00:35:17.062 ' 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:17.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.062 --rc genhtml_branch_coverage=1 00:35:17.062 --rc genhtml_function_coverage=1 00:35:17.062 --rc genhtml_legend=1 00:35:17.062 --rc geninfo_all_blocks=1 00:35:17.062 --rc geninfo_unexecuted_blocks=1 00:35:17.062 00:35:17.062 ' 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:17.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.062 --rc genhtml_branch_coverage=1 00:35:17.062 --rc genhtml_function_coverage=1 00:35:17.062 --rc genhtml_legend=1 00:35:17.062 --rc geninfo_all_blocks=1 00:35:17.062 --rc geninfo_unexecuted_blocks=1 00:35:17.062 00:35:17.062 ' 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:17.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.062 --rc genhtml_branch_coverage=1 00:35:17.062 --rc genhtml_function_coverage=1 00:35:17.062 --rc genhtml_legend=1 00:35:17.062 --rc geninfo_all_blocks=1 00:35:17.062 --rc 
geninfo_unexecuted_blocks=1 00:35:17.062 00:35:17.062 ' 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.062 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:35:17.063 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
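The PCI scan above reduces to a small pattern: collect the bus addresses whose vendor/device IDs match the requested NIC family (here SPDK_TEST_NVMF_NICS=e810, i.e. Intel 0x1592/0x159b), then map each address to its kernel net device through sysfs, as the 'Found net devices under ...' lines below confirm. A condensed sketch using the variable names from the xtrace:

e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")                               # E810 selected; mlx/x722 unused
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    net_devs+=("${pci_net_devs[@]##*/}")
done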
00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:23.631 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:23.631 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:23.631 Found net devices under 0000:86:00.0: cvl_0_0 00:35:23.631 
06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.631 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:23.632 Found net devices under 0000:86:00.1: cvl_0_1 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:23.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:35:23.632 00:35:23.632 --- 10.0.0.2 ping statistics --- 00:35:23.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.632 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:23.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:35:23.632 00:35:23.632 --- 10.0.0.1 ping statistics --- 00:35:23.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.632 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=753536 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 753536 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 753536 ']' 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
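At this point nvmf_tcp_init has finished: the two E810 ports are wired into a loopback topology, the target-side port has been moved into a private network namespace, and both directions have been verified with ping. Condensed to its essentials, the setup traced above is (interface names are specific to this host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # harness also tags this rule with an SPDK_NVMF comment

The nvmf_tgt application itself is then launched inside the namespace (see the ip netns exec invocation just above), so all NVMe/TCP traffic between initiator and target crosses the physical link.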
00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.632 [2024-11-20 06:45:54.779660] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:23.632 [2024-11-20 06:45:54.780600] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:35:23.632 [2024-11-20 06:45:54.780637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.632 [2024-11-20 06:45:54.858043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:23.632 [2024-11-20 06:45:54.901413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:23.632 [2024-11-20 06:45:54.901449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:23.632 [2024-11-20 06:45:54.901456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.632 [2024-11-20 06:45:54.901461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.632 [2024-11-20 06:45:54.901466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:23.632 [2024-11-20 06:45:54.902903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.632 [2024-11-20 06:45:54.903015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:23.632 [2024-11-20 06:45:54.903119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.632 [2024-11-20 06:45:54.903121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:23.632 [2024-11-20 06:45:54.903398] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
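The --interrupt-mode flag is what produces the "to intr mode" notices above: instead of busy-polling, each reactor parks on an event fd until work arrives. As a hedged aside (not part of this test), the reactor and thread layout can be inspected over the same RPC socket, assuming a standard SPDK checkout:

./scripts/rpc.py framework_get_reactors   # per-core reactor/thread layout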
00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.632 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.632 [2024-11-20 06:45:55.022720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:23.632 [2024-11-20 06:45:55.023006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:23.633 [2024-11-20 06:45:55.023311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:23.633 [2024-11-20 06:45:55.023466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
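rpc_cmd in autotest_common.sh forwards to SPDK's scripts/rpc.py on /var/tmp/spdk.sock, so the bdev_set_options and framework_start_init calls above, together with the transport, bdev, and subsystem calls that follow, are equivalent to this manual sequence (arguments copied verbatim from the trace; paths assume a standard SPDK checkout):

./scripts/rpc.py bdev_set_options -p 5 -c 1
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420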
00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.633 [2024-11-20 06:45:55.035815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.633 Malloc0 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:23.633 [2024-11-20 06:45:55.107957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=753559 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=753561 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.633 { 00:35:23.633 "params": { 00:35:23.633 "name": "Nvme$subsystem", 00:35:23.633 "trtype": "$TEST_TRANSPORT", 00:35:23.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.633 "adrfam": "ipv4", 00:35:23.633 "trsvcid": "$NVMF_PORT", 00:35:23.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.633 "hdgst": ${hdgst:-false}, 00:35:23.633 "ddgst": ${ddgst:-false} 00:35:23.633 }, 00:35:23.633 "method": "bdev_nvme_attach_controller" 00:35:23.633 } 00:35:23.633 EOF 00:35:23.633 )") 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=753563 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.633 { 00:35:23.633 "params": { 00:35:23.633 "name": "Nvme$subsystem", 00:35:23.633 "trtype": "$TEST_TRANSPORT", 00:35:23.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.633 "adrfam": "ipv4", 00:35:23.633 "trsvcid": "$NVMF_PORT", 00:35:23.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.633 "hdgst": ${hdgst:-false}, 00:35:23.633 "ddgst": ${ddgst:-false} 00:35:23.633 }, 00:35:23.633 "method": "bdev_nvme_attach_controller" 00:35:23.633 } 00:35:23.633 EOF 00:35:23.633 )") 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=753566 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.633 { 00:35:23.633 "params": { 00:35:23.633 "name": "Nvme$subsystem", 00:35:23.633 "trtype": "$TEST_TRANSPORT", 00:35:23.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.633 "adrfam": "ipv4", 00:35:23.633 "trsvcid": "$NVMF_PORT", 00:35:23.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.633 "hdgst": ${hdgst:-false}, 00:35:23.633 "ddgst": ${ddgst:-false} 00:35:23.633 }, 00:35:23.633 "method": "bdev_nvme_attach_controller" 00:35:23.633 } 00:35:23.633 EOF 00:35:23.633 )") 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.633 { 00:35:23.633 "params": { 00:35:23.633 "name": "Nvme$subsystem", 00:35:23.633 "trtype": "$TEST_TRANSPORT", 00:35:23.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.633 "adrfam": "ipv4", 00:35:23.633 "trsvcid": "$NVMF_PORT", 00:35:23.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.633 "hdgst": ${hdgst:-false}, 00:35:23.633 "ddgst": ${ddgst:-false} 00:35:23.633 }, 00:35:23.633 "method": "bdev_nvme_attach_controller" 00:35:23.633 } 00:35:23.633 EOF 00:35:23.633 )") 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 753559 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.633 "params": { 00:35:23.633 "name": "Nvme1", 00:35:23.633 "trtype": "tcp", 00:35:23.633 "traddr": "10.0.0.2", 00:35:23.633 "adrfam": "ipv4", 00:35:23.633 "trsvcid": "4420", 00:35:23.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.633 "hdgst": false, 00:35:23.633 "ddgst": false 00:35:23.633 }, 00:35:23.633 "method": "bdev_nvme_attach_controller" 00:35:23.633 }' 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:23.633 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:23.634 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.634 "params": { 00:35:23.634 "name": "Nvme1", 00:35:23.634 "trtype": "tcp", 00:35:23.634 "traddr": "10.0.0.2", 00:35:23.634 "adrfam": "ipv4", 00:35:23.634 "trsvcid": "4420", 00:35:23.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.634 "hdgst": false, 00:35:23.634 "ddgst": false 00:35:23.634 }, 00:35:23.634 "method": "bdev_nvme_attach_controller" 00:35:23.634 }' 00:35:23.634 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:23.634 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.634 "params": { 00:35:23.634 "name": "Nvme1", 00:35:23.634 "trtype": "tcp", 00:35:23.634 "traddr": "10.0.0.2", 00:35:23.634 "adrfam": "ipv4", 00:35:23.634 "trsvcid": "4420", 00:35:23.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.634 "hdgst": false, 00:35:23.634 "ddgst": false 00:35:23.634 }, 00:35:23.634 "method": "bdev_nvme_attach_controller" 00:35:23.634 }' 00:35:23.634 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:23.634 06:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.634 "params": { 00:35:23.634 "name": "Nvme1", 00:35:23.634 "trtype": "tcp", 00:35:23.634 "traddr": "10.0.0.2", 00:35:23.634 "adrfam": "ipv4", 00:35:23.634 "trsvcid": "4420", 00:35:23.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.634 "hdgst": false, 00:35:23.634 "ddgst": false 00:35:23.634 }, 00:35:23.634 "method": "bdev_nvme_attach_controller" 00:35:23.634 }' 00:35:23.634 [2024-11-20 06:45:55.161821] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:35:23.634 [2024-11-20 06:45:55.161874] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:23.634 [2024-11-20 06:45:55.163166] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:35:23.634 [2024-11-20 06:45:55.163217] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:23.634 [2024-11-20 06:45:55.163252] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:35:23.634 [2024-11-20 06:45:55.163293] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:23.634 [2024-11-20 06:45:55.165469] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:35:23.634 [2024-11-20 06:45:55.165521] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:23.634 [2024-11-20 06:45:55.352198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.634 [2024-11-20 06:45:55.394613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:23.634 [2024-11-20 06:45:55.450884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.892 [2024-11-20 06:45:55.493462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:23.892 [2024-11-20 06:45:55.546167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.892 [2024-11-20 06:45:55.593422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:23.892 [2024-11-20 06:45:55.606838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.892 [2024-11-20 06:45:55.649516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:23.892 Running I/O for 1 seconds... 00:35:24.150 Running I/O for 1 seconds... 00:35:24.150 Running I/O for 1 seconds... 00:35:24.150 Running I/O for 1 seconds... 
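Four bdevperf instances are now running concurrently, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each with its own instance id and 256 MiB memory pool. The --json /dev/fd/63 argument seen in their command lines is bash process substitution: gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed earlier, and bdevperf reads them as its bdev configuration. A sketch of the write instance, assuming the harness's gen_nvmf_target_json helper is in scope:

./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(gen_nvmf_target_json)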
00:35:25.084 8987.00 IOPS, 35.11 MiB/s 00:35:25.084 Latency(us) 00:35:25.084 [2024-11-20T05:45:56.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.084 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:35:25.084 Nvme1n1 : 1.01 9006.23 35.18 0.00 0.00 14087.28 3557.67 23343.30 00:35:25.084 [2024-11-20T05:45:56.920Z] =================================================================================================================== 00:35:25.084 [2024-11-20T05:45:56.920Z] Total : 9006.23 35.18 0.00 0.00 14087.28 3557.67 23343.30 00:35:25.084 254920.00 IOPS, 995.78 MiB/s 00:35:25.084 Latency(us) 00:35:25.084 [2024-11-20T05:45:56.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.084 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:35:25.084 Nvme1n1 : 1.00 254536.27 994.28 0.00 0.00 500.36 223.33 1497.97 00:35:25.084 [2024-11-20T05:45:56.920Z] =================================================================================================================== 00:35:25.084 [2024-11-20T05:45:56.920Z] Total : 254536.27 994.28 0.00 0.00 500.36 223.33 1497.97 00:35:25.084 8409.00 IOPS, 32.85 MiB/s 00:35:25.084 Latency(us) 00:35:25.084 [2024-11-20T05:45:56.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.084 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:35:25.084 Nvme1n1 : 1.01 8523.37 33.29 0.00 0.00 14981.02 3869.74 26713.72 00:35:25.084 [2024-11-20T05:45:56.920Z] =================================================================================================================== 00:35:25.084 [2024-11-20T05:45:56.920Z] Total : 8523.37 33.29 0.00 0.00 14981.02 3869.74 26713.72 00:35:25.084 06:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 753561 00:35:25.084 12202.00 IOPS, 47.66 MiB/s 00:35:25.084 Latency(us) 00:35:25.084 [2024-11-20T05:45:56.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.084 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:35:25.084 Nvme1n1 : 1.01 12269.81 47.93 0.00 0.00 10404.39 3838.54 14542.75 00:35:25.084 [2024-11-20T05:45:56.920Z] =================================================================================================================== 00:35:25.084 [2024-11-20T05:45:56.920Z] Total : 12269.81 47.93 0.00 0.00 10404.39 3838.54 14542.75 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 753563 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 753566 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:25.342 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:25.343 rmmod nvme_tcp 00:35:25.343 rmmod nvme_fabrics 00:35:25.343 rmmod nvme_keyring 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 753536 ']' 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 753536 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 753536 ']' 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 753536 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 753536 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 753536' 00:35:25.343 killing process with pid 753536 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 753536 00:35:25.343 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 753536 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:35:25.602 
06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.602 06:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:28.135 00:35:28.135 real 0m10.734s 00:35:28.135 user 0m15.048s 00:35:28.135 sys 0m6.461s 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:28.135 ************************************ 00:35:28.135 END TEST nvmf_bdev_io_wait 00:35:28.135 ************************************ 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:28.135 ************************************ 00:35:28.135 START TEST nvmf_queue_depth 00:35:28.135 ************************************ 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:28.135 * Looking for test storage... 
00:35:28.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:28.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.135 --rc genhtml_branch_coverage=1 00:35:28.135 --rc genhtml_function_coverage=1 00:35:28.135 --rc genhtml_legend=1 00:35:28.135 --rc geninfo_all_blocks=1 00:35:28.135 --rc geninfo_unexecuted_blocks=1 00:35:28.135 00:35:28.135 ' 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:28.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.135 --rc genhtml_branch_coverage=1 00:35:28.135 --rc genhtml_function_coverage=1 00:35:28.135 --rc genhtml_legend=1 00:35:28.135 --rc geninfo_all_blocks=1 00:35:28.135 --rc geninfo_unexecuted_blocks=1 00:35:28.135 00:35:28.135 ' 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:28.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.135 --rc genhtml_branch_coverage=1 00:35:28.135 --rc genhtml_function_coverage=1 00:35:28.135 --rc genhtml_legend=1 00:35:28.135 --rc geninfo_all_blocks=1 00:35:28.135 --rc geninfo_unexecuted_blocks=1 00:35:28.135 00:35:28.135 ' 00:35:28.135 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:28.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.135 --rc genhtml_branch_coverage=1 00:35:28.135 --rc genhtml_function_coverage=1 00:35:28.135 --rc genhtml_legend=1 00:35:28.135 --rc geninfo_all_blocks=1 00:35:28.135 --rc 
geninfo_unexecuted_blocks=1 00:35:28.135 00:35:28.135 ' 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:28.136 06:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.701 06:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:34.701 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:34.701 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:35:34.701 Found net devices under 0000:86:00.0: cvl_0_0 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:34.701 Found net devices under 0000:86:00.1: cvl_0_1 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.701 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:35:34.702 00:35:34.702 --- 10.0.0.2 ping statistics --- 00:35:34.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.702 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:35:34.702 00:35:34.702 --- 10.0.0.1 ping statistics --- 00:35:34.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.702 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=757341 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 757341 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 757341 ']' 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
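
What the trace above captures is nvmf_tcp_init building the test topology: one E810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe-oF target, its sibling (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens TCP port 4420, and both directions are ping-verified. Condensed into plain shell, with interface names and addresses exactly as they appear in this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Running the target in its own namespace lets a single host exercise a real TCP path over the physical E810 ports rather than loopback.
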
00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.702 [2024-11-20 06:46:05.629401] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:34.702 [2024-11-20 06:46:05.630288] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:35:34.702 [2024-11-20 06:46:05.630320] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.702 [2024-11-20 06:46:05.711025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.702 [2024-11-20 06:46:05.751251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.702 [2024-11-20 06:46:05.751286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:34.702 [2024-11-20 06:46:05.751294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.702 [2024-11-20 06:46:05.751299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:34.702 [2024-11-20 06:46:05.751304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.702 [2024-11-20 06:46:05.751838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.702 [2024-11-20 06:46:05.819247] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:34.702 [2024-11-20 06:46:05.819450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
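
The NOTICE lines above are the target coming up: nvmfappstart launches nvmf_tgt inside the namespace with tracepoints enabled (-e 0xFFFF), a one-core mask (-m 0x2, hence "Reactor started on core 1"), and --interrupt-mode, which the spdk_interrupt_mode_enable and spdk_thread_set_interrupt_mode messages confirm took effect. Roughly what the helper runs (path as in this workspace; the backgrounding and pid capture are how the wrapper behaves, not shown verbatim in the trace):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!    # 757341 in this run
    # waitforlisten then polls until the app answers RPCs on /var/tmp/spdk.sock
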
00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.702 [2024-11-20 06:46:05.888571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.702 Malloc0 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.702 [2024-11-20 06:46:05.972532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=757460 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 757460 /var/tmp/bdevperf.sock 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 757460 ']' 00:35:34.702 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:34.703 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:34.703 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:34.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:34.703 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:34.703 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.703 [2024-11-20 06:46:06.023501] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:35:34.703 [2024-11-20 06:46:06.023545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757460 ] 00:35:34.703 [2024-11-20 06:46:06.097651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.703 [2024-11-20 06:46:06.139766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.703 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:34.703 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:35:34.703 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:34.703 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.703 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:34.703 NVMe0n1 00:35:34.703 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.703 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:34.703 Running I/O for 10 seconds... 00:35:37.013 12104.00 IOPS, 47.28 MiB/s [2024-11-20T05:46:09.784Z] 12288.00 IOPS, 48.00 MiB/s [2024-11-20T05:46:10.720Z] 12397.67 IOPS, 48.43 MiB/s [2024-11-20T05:46:11.656Z] 12462.25 IOPS, 48.68 MiB/s [2024-11-20T05:46:12.591Z] 12474.80 IOPS, 48.73 MiB/s [2024-11-20T05:46:13.524Z] 12451.33 IOPS, 48.64 MiB/s [2024-11-20T05:46:14.460Z] 12436.86 IOPS, 48.58 MiB/s [2024-11-20T05:46:15.837Z] 12469.62 IOPS, 48.71 MiB/s [2024-11-20T05:46:16.774Z] 12506.89 IOPS, 48.86 MiB/s [2024-11-20T05:46:16.774Z] 12491.90 IOPS, 48.80 MiB/s 00:35:44.938 Latency(us) 00:35:44.938 [2024-11-20T05:46:16.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.938 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:44.938 Verification LBA range: start 0x0 length 0x4000 00:35:44.938 NVMe0n1 : 10.05 12527.74 48.94 0.00 0.00 81490.38 12233.39 52179.14 00:35:44.938 [2024-11-20T05:46:16.775Z] =================================================================================================================== 00:35:44.939 [2024-11-20T05:46:16.775Z] Total : 12527.74 48.94 0.00 0.00 81490.38 12233.39 52179.14 00:35:44.939 { 00:35:44.939 "results": [ 00:35:44.939 { 00:35:44.939 "job": "NVMe0n1", 00:35:44.939 "core_mask": "0x1", 00:35:44.939 "workload": "verify", 00:35:44.939 "status": "finished", 00:35:44.939 "verify_range": { 00:35:44.939 "start": 0, 00:35:44.939 "length": 16384 00:35:44.939 }, 00:35:44.939 "queue_depth": 1024, 00:35:44.939 "io_size": 4096, 00:35:44.939 "runtime": 10.051852, 00:35:44.939 "iops": 12527.74115655503, 00:35:44.939 "mibps": 48.936488892793086, 00:35:44.939 "io_failed": 0, 00:35:44.939 "io_timeout": 0, 00:35:44.939 "avg_latency_us": 81490.37635848737, 00:35:44.939 "min_latency_us": 12233.386666666667, 00:35:44.939 "max_latency_us": 52179.13904761905 00:35:44.939 } 
00:35:44.939 ], 00:35:44.939 "core_count": 1 00:35:44.939 } 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 757460 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 757460 ']' 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 757460 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 757460 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 757460' 00:35:44.939 killing process with pid 757460 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 757460 00:35:44.939 Received shutdown signal, test time was about 10.000000 seconds 00:35:44.939 00:35:44.939 Latency(us) 00:35:44.939 [2024-11-20T05:46:16.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.939 [2024-11-20T05:46:16.775Z] =================================================================================================================== 00:35:44.939 [2024-11-20T05:46:16.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 757460 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:44.939 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:44.939 rmmod nvme_tcp 00:35:44.939 rmmod nvme_fabrics 00:35:45.198 rmmod nvme_keyring 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:45.198 
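
That completes the queue-depth measurement. In order, the trace above provisioned the target over RPC (rpc_cmd is a thin wrapper around scripts/rpc.py), pointed bdevperf at it from the root namespace, drove a 10-second 4 KiB verify workload at queue depth 1024, and landed at ~12.5k IOPS / 48.9 MiB/s with ~81.5 ms average latency, consistent with Little's law at that depth (1024 / 12527 IOPS ≈ 81.7 ms), before tearing bdevperf down. The same flow as explicit commands, all taken from the trace, with the rpc.py shorthand assumed for rpc_cmd:

    # target side, over the default /var/tmp/spdk.sock
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf as its own app instance (-z waits for RPCs)
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
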
06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 757341 ']' 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 757341 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 757341 ']' 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 757341 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 757341 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 757341' 00:35:45.198 killing process with pid 757341 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 757341 00:35:45.198 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 757341 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.457 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.361 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:47.361 00:35:47.361 real 0m19.677s 00:35:47.361 user 0m22.539s 00:35:47.361 sys 0m6.387s 00:35:47.361 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:35:47.361 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.361 ************************************ 00:35:47.361 END TEST nvmf_queue_depth 00:35:47.361 ************************************ 00:35:47.361 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:47.361 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:47.361 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:47.361 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:47.361 ************************************ 00:35:47.361 START TEST nvmf_target_multipath 00:35:47.361 ************************************ 00:35:47.361 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:47.621 * Looking for test storage... 00:35:47.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:47.621 06:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.621 --rc genhtml_branch_coverage=1 00:35:47.621 --rc genhtml_function_coverage=1 00:35:47.621 --rc genhtml_legend=1 00:35:47.621 --rc geninfo_all_blocks=1 00:35:47.621 --rc geninfo_unexecuted_blocks=1 00:35:47.621 00:35:47.621 ' 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.621 --rc genhtml_branch_coverage=1 00:35:47.621 --rc genhtml_function_coverage=1 00:35:47.621 --rc genhtml_legend=1 00:35:47.621 --rc geninfo_all_blocks=1 00:35:47.621 --rc geninfo_unexecuted_blocks=1 00:35:47.621 00:35:47.621 ' 00:35:47.621 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.621 --rc genhtml_branch_coverage=1 00:35:47.621 --rc genhtml_function_coverage=1 00:35:47.621 --rc genhtml_legend=1 00:35:47.622 --rc geninfo_all_blocks=1 00:35:47.622 --rc 
geninfo_unexecuted_blocks=1 00:35:47.622 00:35:47.622 ' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:47.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.622 --rc genhtml_branch_coverage=1 00:35:47.622 --rc genhtml_function_coverage=1 00:35:47.622 --rc genhtml_legend=1 00:35:47.622 --rc geninfo_all_blocks=1 00:35:47.622 --rc geninfo_unexecuted_blocks=1 00:35:47.622 00:35:47.622 ' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
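
Worth noting from the common.sh prologue just traced: the initiator's identity is minted once per run with nvme gen-hostnqn and reused by every nvme connect later in the suite. The values below are the ones generated here; how NVME_HOSTID is cut out of the NQN is not visible in the expanded trace, so it is shown literally, and the final connect line is illustrative only:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # -> nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 this run
    NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562   # the UUID portion of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # typical use: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n <subnqn>
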
00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:47.622 06:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:47.622 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
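
nvmftestinit now repeats for the multipath test what the queue-depth test did above: enumerate the host's NICs, classify them by PCI vendor/device ID, keep the E810 pair, and rebuild the namespace topology. The classification in the trace that follows builds per-family arrays out of pci_bus_cache (intel=0x8086, mellanox=0x15b3); condensed, with IDs as probed below:

    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # both 0000:86:00.* ports on this rig match here
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs probed
    pci_devs=("${e810[@]}")                      # kept because the configured NIC type is e810

Because only this one port pair exists, nvmf_tcp_init will again leave NVMF_SECOND_TARGET_IP empty, and multipath.sh short-circuits at the end of this section with "only one NIC for nvmf test"; the empty -z operand at multipath.sh@45 is consistent with that variable.
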
00:35:54.190 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:54.190 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:54.191 06:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:54.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:54.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:54.191 06:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:54.191 Found net devices under 0000:86:00.0: cvl_0_0 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:54.191 Found net devices under 0000:86:00.1: cvl_0_1 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:54.191 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:54.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:54.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:35:54.192 00:35:54.192 --- 10.0.0.2 ping statistics --- 00:35:54.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.192 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:54.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:54.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:35:54.192 00:35:54.192 --- 10.0.0.1 ping statistics --- 00:35:54.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.192 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:54.192 only one NIC for nvmf test 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:54.192 rmmod nvme_tcp 00:35:54.192 rmmod nvme_fabrics 00:35:54.192 rmmod nvme_keyring 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:54.192 06:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:54.192 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:56.097 06:46:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:56.097 00:35:56.097 real 0m8.297s 00:35:56.097 user 0m1.806s 00:35:56.097 sys 0m4.507s 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:56.097 ************************************ 00:35:56.097 END TEST nvmf_target_multipath 00:35:56.097 ************************************ 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:56.097 ************************************ 00:35:56.097 START TEST nvmf_zcopy 00:35:56.097 ************************************ 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:56.097 * Looking for test storage... 
00:35:56.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:56.097 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:56.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.098 --rc genhtml_branch_coverage=1 00:35:56.098 --rc genhtml_function_coverage=1 00:35:56.098 --rc genhtml_legend=1 00:35:56.098 --rc geninfo_all_blocks=1 00:35:56.098 --rc geninfo_unexecuted_blocks=1 00:35:56.098 00:35:56.098 ' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:56.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.098 --rc genhtml_branch_coverage=1 00:35:56.098 --rc genhtml_function_coverage=1 00:35:56.098 --rc genhtml_legend=1 00:35:56.098 --rc geninfo_all_blocks=1 00:35:56.098 --rc geninfo_unexecuted_blocks=1 00:35:56.098 00:35:56.098 ' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:56.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.098 --rc genhtml_branch_coverage=1 00:35:56.098 --rc genhtml_function_coverage=1 00:35:56.098 --rc genhtml_legend=1 00:35:56.098 --rc geninfo_all_blocks=1 00:35:56.098 --rc geninfo_unexecuted_blocks=1 00:35:56.098 00:35:56.098 ' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:56.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.098 --rc genhtml_branch_coverage=1 00:35:56.098 --rc genhtml_function_coverage=1 00:35:56.098 --rc genhtml_legend=1 00:35:56.098 --rc geninfo_all_blocks=1 00:35:56.098 --rc geninfo_unexecuted_blocks=1 00:35:56.098 00:35:56.098 ' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.098 06:46:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:56.098 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:56.099 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:36:02.762 06:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:02.762 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:02.762 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:02.762 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:02.763 Found net devices under 0000:86:00.0: cvl_0_0 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:02.763 Found net devices under 0000:86:00.1: cvl_0_1 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:02.763 06:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:02.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:36:02.763 00:36:02.763 --- 10.0.0.2 ping statistics --- 00:36:02.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.763 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:02.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:36:02.763 00:36:02.763 --- 10.0.0.1 ping statistics --- 00:36:02.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.763 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=766057 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 766057 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 766057 ']' 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.763 [2024-11-20 06:46:33.706813] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:02.763 [2024-11-20 06:46:33.707776] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:36:02.763 [2024-11-20 06:46:33.707816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.763 [2024-11-20 06:46:33.787736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.763 [2024-11-20 06:46:33.827586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.763 [2024-11-20 06:46:33.827621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.763 [2024-11-20 06:46:33.827629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.763 [2024-11-20 06:46:33.827634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.763 [2024-11-20 06:46:33.827641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:02.763 [2024-11-20 06:46:33.828168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.763 [2024-11-20 06:46:33.893895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:02.763 [2024-11-20 06:46:33.894140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
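At this point nvmfappstart has brought the target up: nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace, pinned to core 1 (-m 0x2) with every tracepoint group enabled (-e 0xFFFF) and interrupt mode on, and waitforlisten polls the RPC socket until the app answers. A minimal sketch of that bring-up, assuming SPDK's scripts/rpc.py client and its spdk_get_version method; the polling loop is illustrative rather than the literal waitforlisten from common/autotest_common.sh:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk
  SOCK=/var/tmp/spdk.sock

  # Start the target inside the namespace: -i 0 fixes the shm ID,
  # -e 0xFFFF enables all tracepoint groups, -m 0x2 pins it to core 1.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!

  # The RPC UNIX socket lives on the shared filesystem, so it is reachable
  # from the root namespace; retry until the app starts answering.
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version > /dev/null 2>&1 && break
      sleep 0.1
  done

The two thread.c notices above confirm that both the app thread and the nvmf poll group thread really came up in interrupt mode instead of busy-polling.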
00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:02.763 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.764 [2024-11-20 06:46:33.968832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.764 [2024-11-20 06:46:33.993068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.764 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:02.764 06:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.764 malloc0 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.764 { 00:36:02.764 "params": { 00:36:02.764 "name": "Nvme$subsystem", 00:36:02.764 "trtype": "$TEST_TRANSPORT", 00:36:02.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.764 "adrfam": "ipv4", 00:36:02.764 "trsvcid": "$NVMF_PORT", 00:36:02.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.764 "hdgst": ${hdgst:-false}, 00:36:02.764 "ddgst": ${ddgst:-false} 00:36:02.764 }, 00:36:02.764 "method": "bdev_nvme_attach_controller" 00:36:02.764 } 00:36:02.764 EOF 00:36:02.764 )") 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:02.764 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:02.764 "params": { 00:36:02.764 "name": "Nvme1", 00:36:02.764 "trtype": "tcp", 00:36:02.764 "traddr": "10.0.0.2", 00:36:02.764 "adrfam": "ipv4", 00:36:02.764 "trsvcid": "4420", 00:36:02.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:02.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:02.764 "hdgst": false, 00:36:02.764 "ddgst": false 00:36:02.764 }, 00:36:02.764 "method": "bdev_nvme_attach_controller" 00:36:02.764 }' 00:36:02.764 [2024-11-20 06:46:34.083334] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
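The trace above provisions the whole zcopy data path over RPC and then launches bdevperf against it. Condensed into plain commands, all copied from the xtrace (rpc_cmd is simply rpc.py talking to the target's /var/tmp/spdk.sock), the sequence is:

  # TCP transport for the test: -c 0 sets the in-capsule data size to zero
  # and --zcopy enables the zero-copy receive path under test; -o is the
  # stock TCP transport option assembled earlier in nvmf/common.sh.
  rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy

  # Subsystem with up to 10 namespaces, a TCP listener on 10.0.0.2:4420,
  # and one 32 MiB / 4096-byte-block malloc bdev attached as NSID 1.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # bdevperf reads the generated attach config from an anonymous fd
  # (the /dev/fd/62 in the trace is bash process substitution):
  "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The gen_nvmf_target_json output printed above is exactly what bdevperf receives: a single bdev_nvme_attach_controller call pointing Nvme1 at 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, with header and data digests off.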
00:36:02.764 [2024-11-20 06:46:34.083375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766257 ] 00:36:02.764 [2024-11-20 06:46:34.156602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.764 [2024-11-20 06:46:34.196973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.764 Running I/O for 10 seconds... 00:36:05.073 8537.00 IOPS, 66.70 MiB/s [2024-11-20T05:46:37.842Z] 8610.50 IOPS, 67.27 MiB/s [2024-11-20T05:46:38.776Z] 8610.33 IOPS, 67.27 MiB/s [2024-11-20T05:46:39.708Z] 8627.50 IOPS, 67.40 MiB/s [2024-11-20T05:46:40.642Z] 8637.00 IOPS, 67.48 MiB/s [2024-11-20T05:46:41.577Z] 8631.17 IOPS, 67.43 MiB/s [2024-11-20T05:46:42.952Z] 8638.14 IOPS, 67.49 MiB/s [2024-11-20T05:46:43.886Z] 8644.88 IOPS, 67.54 MiB/s [2024-11-20T05:46:44.819Z] 8646.89 IOPS, 67.55 MiB/s [2024-11-20T05:46:44.819Z] 8642.70 IOPS, 67.52 MiB/s 00:36:12.983 Latency(us) 00:36:12.983 [2024-11-20T05:46:44.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.983 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:12.983 Verification LBA range: start 0x0 length 0x1000 00:36:12.983 Nvme1n1 : 10.01 8642.79 67.52 0.00 0.00 14767.58 1240.50 21346.01 00:36:12.983 [2024-11-20T05:46:44.819Z] =================================================================================================================== 00:36:12.983 [2024-11-20T05:46:44.819Z] Total : 8642.79 67.52 0.00 0.00 14767.58 1240.50 21346.01 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=767860 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:12.983 { 00:36:12.983 "params": { 00:36:12.983 "name": "Nvme$subsystem", 00:36:12.983 "trtype": "$TEST_TRANSPORT", 00:36:12.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:12.983 "adrfam": "ipv4", 00:36:12.983 "trsvcid": "$NVMF_PORT", 00:36:12.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:12.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:12.983 "hdgst": ${hdgst:-false}, 00:36:12.983 "ddgst": ${ddgst:-false} 00:36:12.983 }, 00:36:12.983 "method": "bdev_nvme_attach_controller" 00:36:12.983 } 00:36:12.983 EOF 00:36:12.983 )") 00:36:12.983 [2024-11-20 06:46:44.708517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:36:12.983 [2024-11-20 06:46:44.708552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:36:12.983 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:12.983   "params": {
00:36:12.983     "name": "Nvme1",
00:36:12.983     "trtype": "tcp",
00:36:12.983     "traddr": "10.0.0.2",
00:36:12.983     "adrfam": "ipv4",
00:36:12.983     "trsvcid": "4420",
00:36:12.983     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:12.983     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:12.983     "hdgst": false,
00:36:12.983     "ddgst": false
00:36:12.983   },
00:36:12.983   "method": "bdev_nvme_attach_controller"
00:36:12.983 }'
00:36:12.983 [2024-11-20 06:46:44.720477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:12.983 [2024-11-20 06:46:44.720491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every ~12 ms through 06:46:44.744 ...]
00:36:12.984 [2024-11-20 06:46:44.749252] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:36:12.984 [2024-11-20 06:46:44.749293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767860 ]
[... error pair repeats through 06:46:44.816 ...]
00:36:13.241 [2024-11-20 06:46:44.824148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pair repeats through 06:46:44.864 ...]
00:36:13.242 [2024-11-20 06:46:44.865840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... error pair repeats through 06:46:45.056 ...]
00:36:13.242 Running I/O for 5 seconds...
00:36:13.242 [2024-11-20 06:46:45.069418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:13.242 [2024-11-20 06:46:45.069438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with fresh timestamps through 06:46:46.057 ...]
00:36:14.278 16601.00 IOPS, 129.70 MiB/s [2024-11-20T05:46:46.114Z]
[... error pair repeats through 06:46:47.054 ...]
00:36:15.314 16637.50 IOPS, 129.98 MiB/s [2024-11-20T05:46:47.150Z]
[... error pair repeats through 06:46:48.064 ...]
00:36:16.352 16617.33 IOPS, 129.82 MiB/s [2024-11-20T05:46:48.188Z]
[... error pair repeats through 06:46:48.664 ...]
00:36:16.870 [2024-11-20 06:46:48.677579]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.870 [2024-11-20 06:46:48.677597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.870 [2024-11-20 06:46:48.692320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.870 [2024-11-20 06:46:48.692338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.706521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.706539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.721099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.721117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.736254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.736272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.750568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.750594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.765542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.765559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.779877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.779896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.793697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.793714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.805289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.805306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.817959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.817976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.828360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.828378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.842502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.842520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.857262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.857279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.872348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.872366] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.886085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.886103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.901053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.901070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.912901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.912918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.926438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.926459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.941135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.941153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.129 [2024-11-20 06:46:48.954095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.129 [2024-11-20 06:46:48.954112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.387 [2024-11-20 06:46:48.969013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.387 [2024-11-20 06:46:48.969030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.387 [2024-11-20 06:46:48.984476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.387 [2024-11-20 06:46:48.984498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.387 [2024-11-20 06:46:48.998211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.387 [2024-11-20 06:46:48.998230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.387 [2024-11-20 06:46:49.012939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.387 [2024-11-20 06:46:49.012957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.387 [2024-11-20 06:46:49.025239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.387 [2024-11-20 06:46:49.025257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.387 [2024-11-20 06:46:49.037744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.387 [2024-11-20 06:46:49.037762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.387 [2024-11-20 06:46:49.050216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.050234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.065054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.065072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 16648.50 IOPS, 130.07 MiB/s [2024-11-20T05:46:49.224Z] [2024-11-20 
06:46:49.080284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.080303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.094045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.094064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.108973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.108991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.119811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.119828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.134434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.134452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.148736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.148755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.161265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.161282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.173850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.173867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.184368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.184390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.191251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.191269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.204604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.204622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.388 [2024-11-20 06:46:49.219053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.388 [2024-11-20 06:46:49.219071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.234155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.234173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.248925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.248943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.259541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.259559] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.274723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.274742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.289335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.289353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.299858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.299877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.314888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.314908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.329562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.329581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.344485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.344503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.357738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.357756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.368796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.368814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.381611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.381629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.392790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.392807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.406141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.406159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.421045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.421063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.435930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.435948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.450097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.450115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.464714] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.464731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.646 [2024-11-20 06:46:49.475304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.646 [2024-11-20 06:46:49.475322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.490358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.490376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.504730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.504748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.517635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.517653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.532697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.532716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.544535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.544556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.558795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.558814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.573604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.573624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.589231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.589249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.604241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.604260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.618517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.904 [2024-11-20 06:46:49.618536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.904 [2024-11-20 06:46:49.633071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.905 [2024-11-20 06:46:49.633089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.905 [2024-11-20 06:46:49.645880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.905 [2024-11-20 06:46:49.645898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.905 [2024-11-20 06:46:49.657062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.905 [2024-11-20 06:46:49.657079] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.905 [2024-11-20 06:46:49.670101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.905 [2024-11-20 06:46:49.670120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.905 [2024-11-20 06:46:49.684862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.905 [2024-11-20 06:46:49.684881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.905 [2024-11-20 06:46:49.700286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.905 [2024-11-20 06:46:49.700304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.905 [2024-11-20 06:46:49.714313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.905 [2024-11-20 06:46:49.714331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.905 [2024-11-20 06:46:49.729118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.905 [2024-11-20 06:46:49.729135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.162 [2024-11-20 06:46:49.744821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.162 [2024-11-20 06:46:49.744840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.162 [2024-11-20 06:46:49.760784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.162 [2024-11-20 06:46:49.760801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.162 [2024-11-20 06:46:49.773300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.162 [2024-11-20 06:46:49.773319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.162 [2024-11-20 06:46:49.785913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.162 [2024-11-20 06:46:49.785931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.796382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.796400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.803485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.803506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.815229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.815247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.830033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.830051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.844803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.844820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.857090] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.857107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.869738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.869757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.884507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.884524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.897435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.897453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.910148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.910167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.925366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.925384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.936059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.936076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.950255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.950273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.964950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.964968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.976070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.976088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.163 [2024-11-20 06:46:49.990289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.163 [2024-11-20 06:46:49.990306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.421 [2024-11-20 06:46:50.006066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.421 [2024-11-20 06:46:50.006086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.022546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.022565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.038241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.038261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.053165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.053183] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.068721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.068741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 16628.80 IOPS, 129.91 MiB/s [2024-11-20T05:46:50.258Z] [2024-11-20 06:46:50.077715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.077733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.118418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.118434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 00:36:18.422 Latency(us) 00:36:18.422 [2024-11-20T05:46:50.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:18.422 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:36:18.422 Nvme1n1 : 5.05 16501.47 128.92 0.00 0.00 7688.06 2012.89 46436.94 00:36:18.422 [2024-11-20T05:46:50.258Z] =================================================================================================================== 00:36:18.422 [2024-11-20T05:46:50.258Z] Total : 16501.47 128.92 0.00 0.00 7688.06 2012.89 46436.94 00:36:18.422 [2024-11-20 06:46:50.128474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.128490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.140476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.140488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.152484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.152502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.164476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.164490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.176481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.176501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.188476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.188489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.200475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.200499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.212474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 06:46:50.212486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:18.422 [2024-11-20 06:46:50.224473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:18.422 [2024-11-20 
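For readers skimming the failure spam above: the repeated pair appears to be the test's intended negative path (the run continues and ends with END TEST nvmf_zcopy below). zcopy.sh keeps requesting the same explicit NSID while namespace 1 is still attached, so every attempt is rejected at the RPC layer and I/O is untouched. A minimal standalone reproduction sketch, assuming a running SPDK target with scripts/rpc.py on PATH and an existing subsystem (names copied from the log):

```bash
#!/usr/bin/env bash
# Sketch: provoke "Requested NSID 1 already in use" on purpose.
rpc=./scripts/rpc.py

# Create a backing bdev and claim NSID 1 with it.
$rpc bdev_malloc_create -b malloc0 64 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# A second add with the same explicit NSID must fail: the target logs
# "Requested NSID 1 already in use" and the RPC returns an error.
$rpc bdev_malloc_create -b malloc1 64 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1 \
    && echo "unexpected success"
```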
00:36:18.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (767860) - No such process
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 767860
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:18.681 delay0
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:18.681 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:36:18.681 [2024-11-20 06:46:50.377381] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
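The trace just above is the interesting step of the test: it swaps the namespace's backing device for a delay bdev while the abort example hammers the connection. The harness's rpc_cmd wrapper forwards to scripts/rpc.py, so outside the harness the same sequence could be issued directly. A sketch under the assumptions that a target is already listening on 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 and bdev malloc0 set up, and that bdev_delay_create takes its four latency arguments in microseconds (values copied from the trace):

```bash
#!/usr/bin/env bash
# Sketch: the namespace swap from zcopy.sh, issued directly via rpc.py.
rpc=./scripts/rpc.py

# Detach the live namespace 1 from the subsystem...
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# ...wrap malloc0 in a delay bdev (the harness passes 1000000 for all
# four latency arguments)...
$rpc bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000

# ...and expose the slow bdev as namespace 1 again.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# With each I/O now taking on the order of a second, the abort example
# actually has in-flight commands to cancel.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```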
00:36:26.792 Initializing NVMe Controllers
00:36:26.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:26.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:26.792 Initialization complete. Launching workers.
00:36:26.792 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 13629
00:36:26.792 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13848, failed to submit 71
00:36:26.792          success 13768, unsuccessful 80, failed 0
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:26.792 rmmod nvme_tcp
00:36:26.792 rmmod nvme_fabrics
00:36:26.792 rmmod nvme_keyring
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 766057 ']'
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 766057
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 766057 ']'
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 766057
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 766057
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 766057'
00:36:26.792 killing process with pid 766057
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 766057
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 766057
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:26.792 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:28.169
00:36:28.169 real    0m32.246s
00:36:28.169 user    0m41.705s
00:36:28.169 sys     0m12.857s
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:28.169 ************************************
00:36:28.169 END TEST nvmf_zcopy
00:36:28.169 ************************************
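Not part of the log: the START/END banners and the real/user/sys records above come from SPDK's run_test wrapper in autotest_common.sh. A simplified stand-in that reproduces just the observable behavior (not the actual implementation, which also manages xtrace and exit-code bookkeeping) might look like:

```bash
# Simplified sketch of a run_test-style wrapper (hypothetical, not SPDK's code).
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test nvmf_nmic ./test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
```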
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:28.169 ************************************
00:36:28.169 START TEST nvmf_nmic
00:36:28.169 ************************************
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:36:28.169 * Looking for test storage...
00:36:28.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:36:28.169 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... scripts/common.sh cmp_versions xtrace: ver1=(1 15) vs ver2=(2) with op '<', per-field decimal checks, (( ver1[v] < ver2[v] )) holds, return 0 ...]
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[... @1704/@1705: exports of LCOV_OPTS and LCOV='lcov ...', each carrying the same payload (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1), printed four times over ...]
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
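Aside (not in the log): the LCOV_OPTS/LCOV exports a few records up only take effect if coverage is captured later in the run. A sketch of typical consumption, assuming stock lcov/genhtml and reusing the rc payload from the trace:

```bash
#!/usr/bin/env bash
# Sketch: consume the rc switches exported above with stock lcov/genhtml.
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

# Capture counters from an instrumented build tree into an .info file.
lcov $LCOV_OPTS --capture --directory ./build --output-file coverage.info

# Render an HTML report with branch and function coverage enabled.
genhtml --branch-coverage --function-coverage coverage.info \
    --output-directory coverage_html
```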
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:28.429 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2 through @6: each step prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to an already heavily duplicated PATH, then exports and echoes the final value ...]
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:36:28.430 06:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@315 through @344: declarations of the pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays, followed by appends of the supported device IDs (e810: 0x1592, 0x159b; x722: 0x37d2; mlx: 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) ...]
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:36:34.996 Found 0000:86:00.0 (0x8086 - 0x159b)
00:36:34.996 06:47:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:34.996 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:34.996 Found net devices under 0000:86:00.0: cvl_0_0 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.996 
06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:34.996 Found net devices under 0000:86:00.1: cvl_0_1 00:36:34.996 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
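The namespace and addressing steps above pin down the test topology: one of the two E810 ports (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its cabled peer (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. A minimal sketch of the same pattern, with hypothetical interface names tgt0/ini0 standing in for the cvl_0_* devices:

  #!/usr/bin/env bash
  # Sketch only: assumes two directly cabled ports named tgt0 and ini0.
  set -e
  NS=nvmf_tgt_ns                                # hypothetical namespace name
  ip netns add "$NS"
  ip link set tgt0 netns "$NS"                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev ini0              # initiator IP, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev tgt0
  ip link set ini0 up
  ip netns exec "$NS" ip link set tgt0 up
  ip netns exec "$NS" ip link set lo up
  ping -c 1 10.0.0.2                            # initiator -> target reachability

Isolating the target port in its own namespace is what lets both ends of an NVMe/TCP connection run on a single host without the kernel short-circuiting the traffic over loopback; the two ping checks that follow confirm reachability in each direction before any NVMe traffic is attempted.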
00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:34.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:34.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:36:34.997 00:36:34.997 --- 10.0.0.2 ping statistics --- 00:36:34.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.997 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:34.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:34.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:36:34.997 00:36:34.997 --- 10.0.0.1 ping statistics --- 00:36:34.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.997 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:34.997 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=773443 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 773443 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 773443 ']' 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.997 [2024-11-20 06:47:06.058797] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:34.997 [2024-11-20 06:47:06.059691] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:36:34.997 [2024-11-20 06:47:06.059724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.997 [2024-11-20 06:47:06.139650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:34.997 [2024-11-20 06:47:06.182453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.997 [2024-11-20 06:47:06.182490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.997 [2024-11-20 06:47:06.182497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.997 [2024-11-20 06:47:06.182503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.997 [2024-11-20 06:47:06.182508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:34.997 [2024-11-20 06:47:06.183920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.997 [2024-11-20 06:47:06.184034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:34.997 [2024-11-20 06:47:06.184140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.997 [2024-11-20 06:47:06.184141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:34.997 [2024-11-20 06:47:06.250931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:34.997 [2024-11-20 06:47:06.251869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:34.997 [2024-11-20 06:47:06.251880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:34.997 [2024-11-20 06:47:06.252081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:34.997 [2024-11-20 06:47:06.252152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.997 [2024-11-20 06:47:06.324889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.997 Malloc0 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:34.997 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
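At this point the target is fully assembled: a TCP transport, a 64 MiB RAM-backed Malloc0 bdev, subsystem cnode1 with that bdev as its namespace, and a listener on 10.0.0.2:4420. The same bring-up can be replayed by hand against a running nvmf_tgt with SPDK's rpc.py; the script path below is an assumption, and the flags are taken verbatim from the rpc_cmd calls above:

  #!/usr/bin/env bash
  # Sketch of the RPC sequence the test just issued via rpc_cmd.
  RPC=/path/to/spdk/scripts/rpc.py                # assumed location of rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192    # transport opts as used in this run
  $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The negative check that follows leans on the fact that adding a namespace claims its bdev for exclusive write access, so a second subsystem's attempt to add the same Malloc0 is expected to be rejected.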
00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.998 [2024-11-20 06:47:06.397001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:34.998 test case1: single bdev can't be used in multiple subsystems 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.998 [2024-11-20 06:47:06.420600] bdev.c:8189:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:34.998 [2024-11-20 06:47:06.420623] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:34.998 [2024-11-20 06:47:06.420631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.998 request: 00:36:34.998 { 00:36:34.998 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:34.998 "namespace": { 00:36:34.998 "bdev_name": "Malloc0", 00:36:34.998 "no_auto_visible": false 00:36:34.998 }, 00:36:34.998 "method": "nvmf_subsystem_add_ns", 00:36:34.998 "req_id": 1 00:36:34.998 } 00:36:34.998 Got JSON-RPC error response 00:36:34.998 response: 00:36:34.998 { 00:36:34.998 "code": -32602, 00:36:34.998 "message": "Invalid parameters" 00:36:34.998 } 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:34.998 06:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:34.998 Adding namespace failed - expected result. 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:34.998 test case2: host connect to nvmf target in multiple paths 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.998 [2024-11-20 06:47:06.432716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:34.998 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:35.257 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:35.257 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:36:35.257 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:36:35.257 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:36:35.257 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:36:37.155 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:36:37.155 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:36:37.155 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:36:37.155 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:36:37.155 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:36:37.155 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:36:37.155 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:37.419 [global] 00:36:37.419 thread=1 00:36:37.419 invalidate=1 
00:36:37.419 rw=write 00:36:37.419 time_based=1 00:36:37.419 runtime=1 00:36:37.419 ioengine=libaio 00:36:37.419 direct=1 00:36:37.419 bs=4096 00:36:37.419 iodepth=1 00:36:37.419 norandommap=0 00:36:37.419 numjobs=1 00:36:37.419 00:36:37.419 verify_dump=1 00:36:37.419 verify_backlog=512 00:36:37.419 verify_state_save=0 00:36:37.419 do_verify=1 00:36:37.419 verify=crc32c-intel 00:36:37.419 [job0] 00:36:37.419 filename=/dev/nvme0n1 00:36:37.419 Could not set queue depth (nvme0n1) 00:36:37.675 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:37.675 fio-3.35 00:36:37.675 Starting 1 thread 00:36:39.044 00:36:39.044 job0: (groupid=0, jobs=1): err= 0: pid=774059: Wed Nov 20 06:47:10 2024 00:36:39.044 read: IOPS=2318, BW=9275KiB/s (9497kB/s)(9312KiB/1004msec) 00:36:39.044 slat (nsec): min=6712, max=38683, avg=7885.75, stdev=1300.03 00:36:39.044 clat (usec): min=173, max=41019, avg=253.81, stdev=1457.79 00:36:39.044 lat (usec): min=181, max=41046, avg=261.70, stdev=1458.24 00:36:39.044 clat percentiles (usec): 00:36:39.044 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:36:39.044 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:36:39.044 | 70.00th=[ 196], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:36:39.044 | 99.00th=[ 260], 99.50th=[ 281], 99.90th=[40633], 99.95th=[40633], 00:36:39.044 | 99.99th=[41157] 00:36:39.044 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:36:39.044 slat (nsec): min=9753, max=40001, avg=11121.41, stdev=1645.50 00:36:39.044 clat (usec): min=119, max=3912, avg=137.02, stdev=104.42 00:36:39.044 lat (usec): min=129, max=3924, avg=148.14, stdev=104.49 00:36:39.044 clat percentiles (usec): 00:36:39.044 | 1.00th=[ 123], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 128], 00:36:39.044 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:36:39.044 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 141], 95.00th=[ 145], 00:36:39.044 | 99.00th=[ 157], 99.50th=[ 196], 99.90th=[ 326], 99.95th=[ 3785], 00:36:39.044 | 99.99th=[ 3916] 00:36:39.044 bw ( KiB/s): min= 8192, max=12288, per=100.00%, avg=10240.00, stdev=2896.31, samples=2 00:36:39.044 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:36:39.044 lat (usec) : 250=97.44%, 500=2.43%, 1000=0.02% 00:36:39.044 lat (msec) : 4=0.04%, 50=0.06% 00:36:39.044 cpu : usr=3.19%, sys=6.88%, ctx=4888, majf=0, minf=1 00:36:39.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.044 issued rwts: total=2328,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:39.044 00:36:39.044 Run status group 0 (all jobs): 00:36:39.044 READ: bw=9275KiB/s (9497kB/s), 9275KiB/s-9275KiB/s (9497kB/s-9497kB/s), io=9312KiB (9535kB), run=1004-1004msec 00:36:39.044 WRITE: bw=9.96MiB/s (10.4MB/s), 9.96MiB/s-9.96MiB/s (10.4MB/s-10.4MB/s), io=10.0MiB (10.5MB), run=1004-1004msec 00:36:39.044 00:36:39.044 Disk stats (read/write): 00:36:39.044 nvme0n1: ios=2375/2560, merge=0/0, ticks=466/331, in_queue=797, util=91.18% 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:39.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:39.044 06:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.044 rmmod nvme_tcp 00:36:39.044 rmmod nvme_fabrics 00:36:39.044 rmmod nvme_keyring 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 773443 ']' 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 773443 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 773443 ']' 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 773443 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 773443 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 773443' 00:36:39.044 killing process with pid 773443 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 773443 00:36:39.044 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 773443 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.303 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:41.837 00:36:41.837 real 0m13.187s 00:36:41.837 user 0m24.250s 00:36:41.837 sys 0m6.213s 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:41.837 ************************************ 00:36:41.837 END TEST nvmf_nmic 00:36:41.837 ************************************ 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:41.837 ************************************ 00:36:41.837 START TEST nvmf_fio_target 00:36:41.837 ************************************ 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:41.837 * Looking for test storage... 
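The nmic run above tears itself down in the reverse order of its setup: disconnect the host paths, unload the host-side NVMe modules, kill the target process, strip only the iptables rules the test tagged with an SPDK_NVMF comment, and drop the namespace. A condensed sketch of that mirror-image cleanup, reusing the hypothetical names from the setup sketch earlier (the target PID variable is also an assumption):

  #!/usr/bin/env bash
  # Sketch of the teardown; pairs one-to-one with the setup sketch above.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drops both connected paths
  modprobe -r nvme-tcp nvme-fabrics                    # host-side transport modules
  kill "$nvmfpid"                                      # $nvmfpid: assumed target PID
  iptables-save | grep -v SPDK_NVMF | iptables-restore # remove only tagged test rules
  ip netns delete nvmf_tgt_ns                          # namespace from the setup sketch
  ip -4 addr flush ini0                                # clear the initiator address

Filtering iptables-save through grep for the SPDK_NVMF comment tag is what keeps the cleanup safe on shared CI hosts: only rules the test itself installed are removed.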
00:36:41.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:41.837 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:41.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.838 --rc genhtml_branch_coverage=1 00:36:41.838 --rc genhtml_function_coverage=1 00:36:41.838 --rc genhtml_legend=1 00:36:41.838 --rc geninfo_all_blocks=1 00:36:41.838 --rc geninfo_unexecuted_blocks=1 00:36:41.838 00:36:41.838 ' 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:41.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.838 --rc genhtml_branch_coverage=1 00:36:41.838 --rc genhtml_function_coverage=1 00:36:41.838 --rc genhtml_legend=1 00:36:41.838 --rc geninfo_all_blocks=1 00:36:41.838 --rc geninfo_unexecuted_blocks=1 00:36:41.838 00:36:41.838 ' 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:41.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.838 --rc genhtml_branch_coverage=1 00:36:41.838 --rc genhtml_function_coverage=1 00:36:41.838 --rc genhtml_legend=1 00:36:41.838 --rc geninfo_all_blocks=1 00:36:41.838 --rc geninfo_unexecuted_blocks=1 00:36:41.838 00:36:41.838 ' 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:41.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.838 --rc genhtml_branch_coverage=1 00:36:41.838 --rc genhtml_function_coverage=1 00:36:41.838 --rc genhtml_legend=1 00:36:41.838 --rc geninfo_all_blocks=1 00:36:41.838 --rc geninfo_unexecuted_blocks=1 00:36:41.838 
00:36:41.838 ' 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.838 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:41.839 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:47.110 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:47.110 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:47.110 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:47.110 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:47.110 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:47.369 06:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:47.369 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:47.370 06:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:47.370 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:47.370 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:47.370 Found net 
devices under 0000:86:00.0: cvl_0_0 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:47.370 Found net devices under 0000:86:00.1: cvl_0_1 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:47.370 06:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:47.370 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:47.370 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:47.370 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:47.370 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:47.370 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:47.370 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:47.370 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:47.370 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:47.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:47.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:36:47.630 00:36:47.630 --- 10.0.0.2 ping statistics --- 00:36:47.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.630 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:47.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:47.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:36:47.630 00:36:47.630 --- 10.0.0.1 ping statistics --- 00:36:47.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.630 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=777813 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 777813 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 777813 ']' 00:36:47.630 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.631 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:47.631 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
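[editor's note] In short, nvmftestinit above created the namespace cvl_0_0_ns_spdk, moved one E810 port (cvl_0_0) into it as the target interface at 10.0.0.2 while the host-side port (cvl_0_1) serves as the initiator at 10.0.0.1, opened TCP port 4420 via iptables, and confirmed reachability with a ping in each direction. The waitforlisten step now underway can be approximated by polling the RPC socket; a minimal sketch, assuming rpc.py from the SPDK tree and the default /var/tmp/spdk.sock — the loop shape here is illustrative, not the harness's exact code:

    # Poll until the freshly launched nvmf_tgt answers on its RPC socket.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc_py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
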
00:36:47.631 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:47.631 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:47.631 [2024-11-20 06:47:19.313267] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:47.631 [2024-11-20 06:47:19.314217] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:36:47.631 [2024-11-20 06:47:19.314251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.631 [2024-11-20 06:47:19.389667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:47.631 [2024-11-20 06:47:19.431356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.631 [2024-11-20 06:47:19.431410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:47.631 [2024-11-20 06:47:19.431418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.631 [2024-11-20 06:47:19.431424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.631 [2024-11-20 06:47:19.431429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:47.631 [2024-11-20 06:47:19.432993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.631 [2024-11-20 06:47:19.433110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:47.631 [2024-11-20 06:47:19.433233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.631 [2024-11-20 06:47:19.433234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:47.889 [2024-11-20 06:47:19.500674] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:47.889 [2024-11-20 06:47:19.500823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:47.889 [2024-11-20 06:47:19.501389] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:47.889 [2024-11-20 06:47:19.501622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:47.889 [2024-11-20 06:47:19.501697] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
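[editor's note] Stripped of xtrace noise, the fio.sh setup that the trace below performs reduces to the following RPC and initiator commands (paths shortened to scripts/; each call is taken from the trace itself, with bdev_malloc_create repeated for Malloc0 through Malloc6 and the add_ns/add_listener calls condensed — the exact interleaving is as traced below):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512          # x7: Malloc0..Malloc6
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # also Malloc1, raid0, concat0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid=00ad29c2-ccbd-e911-906e-0017a4403562
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v   # then randwrite, and both again at -d 128

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is what the fio job files dumped below target.
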
00:36:47.889 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:47.889 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:36:47.889 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:47.889 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:47.889 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:47.889 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.889 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:48.148 [2024-11-20 06:47:19.737886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.148 06:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:48.406 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:48.406 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:48.406 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:48.406 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:48.665 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:48.665 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:48.923 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:48.923 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:49.181 06:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:49.439 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:49.439 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:49.439 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:49.439 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:49.697 06:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:36:49.697 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:49.956 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:50.213 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:50.213 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:50.213 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:50.213 06:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:50.469 06:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:50.725 [2024-11-20 06:47:22.365828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:50.725 06:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:50.981 06:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:50.981 06:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:51.238 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:51.238 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:36:51.238 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:36:51.238 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:36:51.238 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:36:51.238 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:36:53.757 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:36:53.757 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:36:53.757 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:36:53.757 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:36:53.757 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:36:53.757 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:36:53.757 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:53.757 [global] 00:36:53.757 thread=1 00:36:53.757 invalidate=1 00:36:53.757 rw=write 00:36:53.757 time_based=1 00:36:53.757 runtime=1 00:36:53.757 ioengine=libaio 00:36:53.757 direct=1 00:36:53.757 bs=4096 00:36:53.757 iodepth=1 00:36:53.757 norandommap=0 00:36:53.757 numjobs=1 00:36:53.757 00:36:53.757 verify_dump=1 00:36:53.757 verify_backlog=512 00:36:53.757 verify_state_save=0 00:36:53.757 do_verify=1 00:36:53.757 verify=crc32c-intel 00:36:53.757 [job0] 00:36:53.757 filename=/dev/nvme0n1 00:36:53.757 [job1] 00:36:53.757 filename=/dev/nvme0n2 00:36:53.757 [job2] 00:36:53.757 filename=/dev/nvme0n3 00:36:53.757 [job3] 00:36:53.757 filename=/dev/nvme0n4 00:36:53.757 Could not set queue depth (nvme0n1) 00:36:53.757 Could not set queue depth (nvme0n2) 00:36:53.757 Could not set queue depth (nvme0n3) 00:36:53.757 Could not set queue depth (nvme0n4) 00:36:53.757 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:53.757 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:53.757 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:53.757 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:53.757 fio-3.35 00:36:53.757 Starting 4 threads 00:36:55.217 00:36:55.217 job0: (groupid=0, jobs=1): err= 0: pid=778936: Wed Nov 20 06:47:26 2024 00:36:55.217 read: IOPS=82, BW=331KiB/s (339kB/s)(336KiB/1014msec) 00:36:55.217 slat (nsec): min=8052, max=26260, avg=12443.92, stdev=6123.23 00:36:55.217 clat (usec): min=206, max=41161, avg=10896.37, stdev=18013.73 00:36:55.217 lat (usec): min=216, max=41184, avg=10908.82, stdev=18019.02 00:36:55.217 clat percentiles (usec): 00:36:55.217 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 219], 00:36:55.217 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 235], 00:36:55.217 | 70.00th=[ 269], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:55.217 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:55.217 | 99.99th=[41157] 00:36:55.217 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:36:55.217 slat (nsec): min=10569, max=40392, avg=11909.11, stdev=2108.14 00:36:55.217 clat (usec): min=142, max=345, avg=174.45, stdev=15.76 00:36:55.217 lat (usec): min=153, max=386, avg=186.36, stdev=16.44 00:36:55.217 clat percentiles (usec): 00:36:55.218 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:36:55.218 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:36:55.218 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 200], 00:36:55.218 | 99.00th=[ 
215], 99.50th=[ 231], 99.90th=[ 347], 99.95th=[ 347], 00:36:55.218 | 99.99th=[ 347] 00:36:55.218 bw ( KiB/s): min= 4096, max= 4096, per=22.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:55.218 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:55.218 lat (usec) : 250=95.47%, 500=0.84% 00:36:55.218 lat (msec) : 50=3.69% 00:36:55.218 cpu : usr=0.59%, sys=0.39%, ctx=596, majf=0, minf=1 00:36:55.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:55.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:55.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:55.218 issued rwts: total=84,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:55.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:55.218 job1: (groupid=0, jobs=1): err= 0: pid=778937: Wed Nov 20 06:47:26 2024 00:36:55.218 read: IOPS=867, BW=3470KiB/s (3553kB/s)(3560KiB/1026msec) 00:36:55.218 slat (nsec): min=6330, max=26419, avg=7455.19, stdev=2156.49 00:36:55.218 clat (usec): min=180, max=41058, avg=938.03, stdev=5417.71 00:36:55.218 lat (usec): min=187, max=41081, avg=945.49, stdev=5419.65 00:36:55.218 clat percentiles (usec): 00:36:55.218 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:36:55.218 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:36:55.218 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 229], 00:36:55.218 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:55.218 | 99.99th=[41157] 00:36:55.218 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:36:55.218 slat (nsec): min=9108, max=44398, avg=10427.13, stdev=1730.65 00:36:55.218 clat (usec): min=124, max=333, avg=164.40, stdev=24.72 00:36:55.218 lat (usec): min=134, max=377, avg=174.82, stdev=25.16 00:36:55.218 clat percentiles (usec): 00:36:55.218 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:36:55.218 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 159], 60.00th=[ 172], 00:36:55.218 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:36:55.218 | 99.00th=[ 231], 99.50th=[ 249], 99.90th=[ 302], 99.95th=[ 334], 00:36:55.218 | 99.99th=[ 334] 00:36:55.218 bw ( KiB/s): min= 8192, max= 8192, per=45.60%, avg=8192.00, stdev= 0.00, samples=1 00:36:55.218 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:36:55.218 lat (usec) : 250=98.07%, 500=1.10% 00:36:55.218 lat (msec) : 50=0.84% 00:36:55.218 cpu : usr=0.98%, sys=1.66%, ctx=1915, majf=0, minf=1 00:36:55.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:55.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:55.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:55.218 issued rwts: total=890,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:55.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:55.218 job2: (groupid=0, jobs=1): err= 0: pid=778938: Wed Nov 20 06:47:26 2024 00:36:55.218 read: IOPS=2334, BW=9339KiB/s (9563kB/s)(9348KiB/1001msec) 00:36:55.218 slat (nsec): min=7034, max=35454, avg=8147.96, stdev=1051.91 00:36:55.218 clat (usec): min=198, max=299, avg=223.22, stdev=15.97 00:36:55.218 lat (usec): min=206, max=308, avg=231.36, stdev=16.24 00:36:55.218 clat percentiles (usec): 00:36:55.218 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 208], 00:36:55.218 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 227], 00:36:55.218 | 
70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 251], 00:36:55.218 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 285], 99.95th=[ 293], 00:36:55.218 | 99.99th=[ 302] 00:36:55.218 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:36:55.218 slat (nsec): min=9760, max=43420, avg=11769.31, stdev=2540.81 00:36:55.218 clat (usec): min=137, max=351, avg=163.15, stdev=17.89 00:36:55.218 lat (usec): min=148, max=385, avg=174.92, stdev=18.89 00:36:55.218 clat percentiles (usec): 00:36:55.218 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 145], 20.00th=[ 147], 00:36:55.218 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 00:36:55.218 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 196], 00:36:55.218 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 318], 99.95th=[ 338], 00:36:55.218 | 99.99th=[ 351] 00:36:55.218 bw ( KiB/s): min=11040, max=11040, per=61.45%, avg=11040.00, stdev= 0.00, samples=1 00:36:55.218 iops : min= 2760, max= 2760, avg=2760.00, stdev= 0.00, samples=1 00:36:55.218 lat (usec) : 250=97.04%, 500=2.96% 00:36:55.218 cpu : usr=1.70%, sys=5.90%, ctx=4898, majf=0, minf=1 00:36:55.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:55.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:55.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:55.218 issued rwts: total=2337,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:55.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:55.218 job3: (groupid=0, jobs=1): err= 0: pid=778939: Wed Nov 20 06:47:26 2024 00:36:55.218 read: IOPS=480, BW=1922KiB/s (1968kB/s)(1924KiB/1001msec) 00:36:55.218 slat (nsec): min=8207, max=23640, avg=9422.44, stdev=2502.55 00:36:55.218 clat (usec): min=205, max=42023, avg=1851.42, stdev=7951.05 00:36:55.218 lat (usec): min=214, max=42045, avg=1860.84, stdev=7953.03 00:36:55.218 clat percentiles (usec): 00:36:55.218 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:36:55.218 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:36:55.218 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 262], 00:36:55.218 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:36:55.218 | 99.99th=[42206] 00:36:55.218 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:36:55.218 slat (nsec): min=10348, max=38142, avg=12242.86, stdev=2300.50 00:36:55.218 clat (usec): min=134, max=929, avg=187.23, stdev=50.68 00:36:55.218 lat (usec): min=145, max=940, avg=199.47, stdev=50.98 00:36:55.218 clat percentiles (usec): 00:36:55.218 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:36:55.218 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:36:55.218 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 235], 00:36:55.218 | 99.00th=[ 260], 99.50th=[ 611], 99.90th=[ 930], 99.95th=[ 930], 00:36:55.218 | 99.99th=[ 930] 00:36:55.218 bw ( KiB/s): min= 4096, max= 4096, per=22.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:55.218 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:55.218 lat (usec) : 250=92.04%, 500=5.64%, 750=0.30%, 1000=0.10% 00:36:55.218 lat (msec) : 50=1.91% 00:36:55.218 cpu : usr=0.50%, sys=1.30%, ctx=993, majf=0, minf=1 00:36:55.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:55.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:55.218 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:55.218 issued rwts: total=481,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:55.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:55.218 00:36:55.218 Run status group 0 (all jobs): 00:36:55.218 READ: bw=14.4MiB/s (15.1MB/s), 331KiB/s-9339KiB/s (339kB/s-9563kB/s), io=14.8MiB (15.5MB), run=1001-1026msec 00:36:55.218 WRITE: bw=17.5MiB/s (18.4MB/s), 2020KiB/s-9.99MiB/s (2068kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1026msec 00:36:55.218 00:36:55.218 Disk stats (read/write): 00:36:55.218 nvme0n1: ios=129/512, merge=0/0, ticks=770/79, in_queue=849, util=86.27% 00:36:55.218 nvme0n2: ios=910/1024, merge=0/0, ticks=1614/164, in_queue=1778, util=98.06% 00:36:55.218 nvme0n3: ios=2106/2110, merge=0/0, ticks=649/330, in_queue=979, util=98.12% 00:36:55.218 nvme0n4: ios=18/512, merge=0/0, ticks=739/90, in_queue=829, util=89.66% 00:36:55.218 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:55.218 [global] 00:36:55.218 thread=1 00:36:55.218 invalidate=1 00:36:55.218 rw=randwrite 00:36:55.218 time_based=1 00:36:55.218 runtime=1 00:36:55.218 ioengine=libaio 00:36:55.218 direct=1 00:36:55.218 bs=4096 00:36:55.218 iodepth=1 00:36:55.218 norandommap=0 00:36:55.218 numjobs=1 00:36:55.218 00:36:55.218 verify_dump=1 00:36:55.218 verify_backlog=512 00:36:55.218 verify_state_save=0 00:36:55.218 do_verify=1 00:36:55.218 verify=crc32c-intel 00:36:55.218 [job0] 00:36:55.218 filename=/dev/nvme0n1 00:36:55.218 [job1] 00:36:55.218 filename=/dev/nvme0n2 00:36:55.218 [job2] 00:36:55.218 filename=/dev/nvme0n3 00:36:55.218 [job3] 00:36:55.218 filename=/dev/nvme0n4 00:36:55.218 Could not set queue depth (nvme0n1) 00:36:55.218 Could not set queue depth (nvme0n2) 00:36:55.218 Could not set queue depth (nvme0n3) 00:36:55.218 Could not set queue depth (nvme0n4) 00:36:55.218 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:55.218 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:55.218 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:55.218 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:55.218 fio-3.35 00:36:55.218 Starting 4 threads 00:36:56.590 00:36:56.590 job0: (groupid=0, jobs=1): err= 0: pid=779312: Wed Nov 20 06:47:28 2024 00:36:56.590 read: IOPS=66, BW=266KiB/s (273kB/s)(272KiB/1021msec) 00:36:56.590 slat (nsec): min=7387, max=33792, avg=12834.75, stdev=6615.34 00:36:56.590 clat (usec): min=216, max=41397, avg=13457.80, stdev=19190.80 00:36:56.590 lat (usec): min=224, max=41405, avg=13470.63, stdev=19189.62 00:36:56.590 clat percentiles (usec): 00:36:56.590 | 1.00th=[ 217], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 253], 00:36:56.590 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 318], 60.00th=[ 355], 00:36:56.590 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:56.590 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:56.590 | 99.99th=[41157] 00:36:56.590 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:36:56.590 slat (nsec): min=10433, max=42320, avg=13430.60, stdev=4546.59 00:36:56.590 clat (usec): min=135, max=405, avg=187.26, stdev=27.05 00:36:56.590 lat (usec): 
min=146, max=445, avg=200.69, stdev=28.62 00:36:56.590 clat percentiles (usec): 00:36:56.590 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 167], 00:36:56.590 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:36:56.590 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 233], 00:36:56.590 | 99.00th=[ 277], 99.50th=[ 343], 99.90th=[ 404], 99.95th=[ 404], 00:36:56.590 | 99.99th=[ 404] 00:36:56.590 bw ( KiB/s): min= 4087, max= 4087, per=20.38%, avg=4087.00, stdev= 0.00, samples=1 00:36:56.590 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:36:56.590 lat (usec) : 250=87.41%, 500=8.79% 00:36:56.590 lat (msec) : 50=3.79% 00:36:56.590 cpu : usr=0.20%, sys=0.69%, ctx=583, majf=0, minf=1 00:36:56.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.590 issued rwts: total=68,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:56.590 job1: (groupid=0, jobs=1): err= 0: pid=779313: Wed Nov 20 06:47:28 2024 00:36:56.590 read: IOPS=22, BW=91.3KiB/s (93.5kB/s)(92.0KiB/1008msec) 00:36:56.590 slat (nsec): min=9238, max=23009, avg=14351.22, stdev=3947.49 00:36:56.590 clat (usec): min=248, max=41186, avg=39210.62, stdev=8493.73 00:36:56.590 lat (usec): min=266, max=41196, avg=39224.98, stdev=8492.87 00:36:56.590 clat percentiles (usec): 00:36:56.590 | 1.00th=[ 249], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:36:56.590 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:56.590 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:56.590 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:56.590 | 99.99th=[41157] 00:36:56.590 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:36:56.590 slat (nsec): min=10579, max=45021, avg=12057.57, stdev=2634.66 00:36:56.590 clat (usec): min=144, max=835, avg=190.49, stdev=53.38 00:36:56.590 lat (usec): min=156, max=846, avg=202.55, stdev=53.83 00:36:56.590 clat percentiles (usec): 00:36:56.590 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:36:56.590 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:36:56.590 | 70.00th=[ 188], 80.00th=[ 239], 90.00th=[ 241], 95.00th=[ 245], 00:36:56.590 | 99.00th=[ 310], 99.50th=[ 570], 99.90th=[ 832], 99.95th=[ 832], 00:36:56.590 | 99.99th=[ 832] 00:36:56.590 bw ( KiB/s): min= 4087, max= 4087, per=20.38%, avg=4087.00, stdev= 0.00, samples=1 00:36:56.590 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:36:56.590 lat (usec) : 250=91.96%, 500=3.36%, 750=0.37%, 1000=0.19% 00:36:56.590 lat (msec) : 50=4.11% 00:36:56.590 cpu : usr=0.40%, sys=0.99%, ctx=537, majf=0, minf=1 00:36:56.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.590 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:56.590 job2: (groupid=0, jobs=1): err= 0: pid=779315: Wed Nov 20 06:47:28 2024 00:36:56.590 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:36:56.590 slat (nsec): min=5217, 
max=26819, avg=7422.07, stdev=1150.26 00:36:56.590 clat (usec): min=174, max=434, avg=211.93, stdev=30.05 00:36:56.590 lat (usec): min=187, max=444, avg=219.35, stdev=30.11 00:36:56.590 clat percentiles (usec): 00:36:56.590 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:36:56.590 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 212], 00:36:56.590 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 253], 95.00th=[ 277], 00:36:56.590 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 420], 99.95th=[ 420], 00:36:56.590 | 99.99th=[ 437] 00:36:56.590 write: IOPS=2627, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:36:56.590 slat (nsec): min=7289, max=39023, avg=10506.56, stdev=1238.73 00:36:56.590 clat (usec): min=120, max=424, avg=152.06, stdev=25.13 00:36:56.590 lat (usec): min=130, max=434, avg=162.57, stdev=25.25 00:36:56.590 clat percentiles (usec): 00:36:56.590 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 135], 00:36:56.590 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 149], 00:36:56.590 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 194], 00:36:56.590 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 322], 99.95th=[ 326], 00:36:56.590 | 99.99th=[ 424] 00:36:56.590 bw ( KiB/s): min=11752, max=11752, per=58.59%, avg=11752.00, stdev= 0.00, samples=1 00:36:56.590 iops : min= 2938, max= 2938, avg=2938.00, stdev= 0.00, samples=1 00:36:56.590 lat (usec) : 250=94.57%, 500=5.43% 00:36:56.590 cpu : usr=2.50%, sys=4.70%, ctx=5192, majf=0, minf=1 00:36:56.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.590 issued rwts: total=2560,2630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:56.590 job3: (groupid=0, jobs=1): err= 0: pid=779316: Wed Nov 20 06:47:28 2024 00:36:56.590 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4140KiB/1035msec) 00:36:56.590 slat (nsec): min=6819, max=71697, avg=9123.61, stdev=4080.65 00:36:56.590 clat (usec): min=176, max=41188, avg=700.07, stdev=4174.38 00:36:56.590 lat (usec): min=185, max=41196, avg=709.19, stdev=4175.53 00:36:56.590 clat percentiles (usec): 00:36:56.590 | 1.00th=[ 186], 5.00th=[ 227], 10.00th=[ 239], 20.00th=[ 245], 00:36:56.590 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 262], 00:36:56.590 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 437], 00:36:56.590 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:56.590 | 99.99th=[41157] 00:36:56.590 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:36:56.590 slat (nsec): min=9623, max=39992, avg=11204.63, stdev=2123.14 00:36:56.590 clat (usec): min=134, max=419, avg=180.40, stdev=31.11 00:36:56.590 lat (usec): min=147, max=459, avg=191.61, stdev=32.07 00:36:56.591 clat percentiles (usec): 00:36:56.591 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:36:56.591 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:36:56.591 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 221], 95.00th=[ 245], 00:36:56.591 | 99.00th=[ 289], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 420], 00:36:56.591 | 99.99th=[ 420] 00:36:56.591 bw ( KiB/s): min= 4096, max= 8192, per=30.63%, avg=6144.00, stdev=2896.31, samples=2 00:36:56.591 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 
00:36:56.591 lat (usec) : 250=68.34%, 500=30.77%, 750=0.47% 00:36:56.591 lat (msec) : 50=0.43% 00:36:56.591 cpu : usr=0.68%, sys=3.09%, ctx=2574, majf=0, minf=1 00:36:56.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.591 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:56.591 00:36:56.591 Run status group 0 (all jobs): 00:36:56.591 READ: bw=13.9MiB/s (14.6MB/s), 91.3KiB/s-9.99MiB/s (93.5kB/s-10.5MB/s), io=14.4MiB (15.1MB), run=1001-1035msec 00:36:56.591 WRITE: bw=19.6MiB/s (20.5MB/s), 2006KiB/s-10.3MiB/s (2054kB/s-10.8MB/s), io=20.3MiB (21.3MB), run=1001-1035msec 00:36:56.591 00:36:56.591 Disk stats (read/write): 00:36:56.591 nvme0n1: ios=87/512, merge=0/0, ticks=1297/96, in_queue=1393, util=99.40% 00:36:56.591 nvme0n2: ios=44/512, merge=0/0, ticks=1722/88, in_queue=1810, util=98.38% 00:36:56.591 nvme0n3: ios=2089/2364, merge=0/0, ticks=689/355, in_queue=1044, util=97.19% 00:36:56.591 nvme0n4: ios=1088/1536, merge=0/0, ticks=685/276, in_queue=961, util=98.32% 00:36:56.591 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:56.591 [global] 00:36:56.591 thread=1 00:36:56.591 invalidate=1 00:36:56.591 rw=write 00:36:56.591 time_based=1 00:36:56.591 runtime=1 00:36:56.591 ioengine=libaio 00:36:56.591 direct=1 00:36:56.591 bs=4096 00:36:56.591 iodepth=128 00:36:56.591 norandommap=0 00:36:56.591 numjobs=1 00:36:56.591 00:36:56.591 verify_dump=1 00:36:56.591 verify_backlog=512 00:36:56.591 verify_state_save=0 00:36:56.591 do_verify=1 00:36:56.591 verify=crc32c-intel 00:36:56.591 [job0] 00:36:56.591 filename=/dev/nvme0n1 00:36:56.591 [job1] 00:36:56.591 filename=/dev/nvme0n2 00:36:56.591 [job2] 00:36:56.591 filename=/dev/nvme0n3 00:36:56.591 [job3] 00:36:56.591 filename=/dev/nvme0n4 00:36:56.591 Could not set queue depth (nvme0n1) 00:36:56.591 Could not set queue depth (nvme0n2) 00:36:56.591 Could not set queue depth (nvme0n3) 00:36:56.591 Could not set queue depth (nvme0n4) 00:36:56.848 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:56.848 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:56.848 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:56.848 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:56.848 fio-3.35 00:36:56.848 Starting 4 threads 00:36:58.219 00:36:58.219 job0: (groupid=0, jobs=1): err= 0: pid=779691: Wed Nov 20 06:47:29 2024 00:36:58.219 read: IOPS=2567, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1005msec) 00:36:58.219 slat (nsec): min=1492, max=15859k, avg=125263.99, stdev=926192.37 00:36:58.219 clat (usec): min=1872, max=37351, avg=16675.49, stdev=6606.22 00:36:58.219 lat (usec): min=4627, max=38227, avg=16800.76, stdev=6657.39 00:36:58.219 clat percentiles (usec): 00:36:58.219 | 1.00th=[ 7373], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[11600], 00:36:58.219 | 30.00th=[11863], 40.00th=[13173], 50.00th=[14877], 60.00th=[15926], 00:36:58.219 | 70.00th=[19006], 80.00th=[21890], 90.00th=[27132], 
95.00th=[30278], 00:36:58.219 | 99.00th=[33817], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:36:58.219 | 99.99th=[37487] 00:36:58.219 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:36:58.219 slat (usec): min=2, max=46046, avg=214.55, stdev=1625.74 00:36:58.219 clat (msec): min=4, max=134, avg=23.95, stdev=14.34 00:36:58.219 lat (msec): min=4, max=134, avg=24.16, stdev=14.55 00:36:58.219 clat percentiles (msec): 00:36:58.219 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 14], 00:36:58.219 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 23], 60.00th=[ 26], 00:36:58.219 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 40], 95.00th=[ 56], 00:36:58.219 | 99.00th=[ 82], 99.50th=[ 103], 99.90th=[ 134], 99.95th=[ 134], 00:36:58.219 | 99.99th=[ 134] 00:36:58.219 bw ( KiB/s): min= 9072, max=14640, per=17.67%, avg=11856.00, stdev=3937.17, samples=2 00:36:58.219 iops : min= 2268, max= 3660, avg=2964.00, stdev=984.29, samples=2 00:36:58.219 lat (msec) : 2=0.02%, 10=9.64%, 20=51.29%, 50=35.77%, 100=2.99% 00:36:58.219 lat (msec) : 250=0.28% 00:36:58.219 cpu : usr=2.69%, sys=3.49%, ctx=231, majf=0, minf=1 00:36:58.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:36:58.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:58.219 issued rwts: total=2580,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:58.219 job1: (groupid=0, jobs=1): err= 0: pid=779692: Wed Nov 20 06:47:29 2024 00:36:58.219 read: IOPS=3636, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1003msec) 00:36:58.219 slat (nsec): min=1022, max=19793k, avg=117694.96, stdev=816641.45 00:36:58.219 clat (usec): min=467, max=63587, avg=13876.77, stdev=6309.25 00:36:58.219 lat (usec): min=3301, max=63591, avg=13994.46, stdev=6371.25 00:36:58.219 clat percentiles (usec): 00:36:58.219 | 1.00th=[ 6063], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10290], 00:36:58.219 | 30.00th=[10552], 40.00th=[10945], 50.00th=[12387], 60.00th=[13304], 00:36:58.219 | 70.00th=[14877], 80.00th=[15401], 90.00th=[21103], 95.00th=[26084], 00:36:58.219 | 99.00th=[42730], 99.50th=[53740], 99.90th=[63701], 99.95th=[63701], 00:36:58.219 | 99.99th=[63701] 00:36:58.219 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:36:58.219 slat (nsec): min=1821, max=23665k, avg=135547.56, stdev=1026622.90 00:36:58.219 clat (usec): min=5945, max=71201, avg=18648.99, stdev=13753.06 00:36:58.219 lat (usec): min=5954, max=71209, avg=18784.54, stdev=13831.67 00:36:58.219 clat percentiles (usec): 00:36:58.219 | 1.00th=[ 7832], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[10159], 00:36:58.219 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11600], 60.00th=[15139], 00:36:58.219 | 70.00th=[16188], 80.00th=[25297], 90.00th=[40109], 95.00th=[50070], 00:36:58.219 | 99.00th=[61080], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:36:58.219 | 99.99th=[70779] 00:36:58.219 bw ( KiB/s): min=14904, max=17344, per=24.02%, avg=16124.00, stdev=1725.34, samples=2 00:36:58.219 iops : min= 3726, max= 4336, avg=4031.00, stdev=431.34, samples=2 00:36:58.219 lat (usec) : 500=0.01% 00:36:58.219 lat (msec) : 4=0.41%, 10=14.84%, 20=66.02%, 50=15.82%, 100=2.89% 00:36:58.219 cpu : usr=2.30%, sys=2.99%, ctx=327, majf=0, minf=1 00:36:58.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:58.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:58.219 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:58.219 job2: (groupid=0, jobs=1): err= 0: pid=779693: Wed Nov 20 06:47:29 2024 00:36:58.219 read: IOPS=3177, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1007msec) 00:36:58.219 slat (nsec): min=1433, max=21354k, avg=135148.72, stdev=1113147.32 00:36:58.219 clat (usec): min=1093, max=44668, avg=17185.45, stdev=6839.93 00:36:58.219 lat (usec): min=4241, max=44695, avg=17320.60, stdev=6915.34 00:36:58.219 clat percentiles (usec): 00:36:58.219 | 1.00th=[ 7504], 5.00th=[ 8291], 10.00th=[10028], 20.00th=[10683], 00:36:58.220 | 30.00th=[11338], 40.00th=[13960], 50.00th=[16450], 60.00th=[17695], 00:36:58.220 | 70.00th=[21365], 80.00th=[23200], 90.00th=[26346], 95.00th=[28705], 00:36:58.220 | 99.00th=[36963], 99.50th=[38011], 99.90th=[38011], 99.95th=[43254], 00:36:58.220 | 99.99th=[44827] 00:36:58.220 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:36:58.220 slat (usec): min=2, max=14528, avg=147.52, stdev=934.05 00:36:58.220 clat (usec): min=1414, max=103124, avg=20298.82, stdev=16338.75 00:36:58.220 lat (usec): min=1424, max=103136, avg=20446.34, stdev=16444.45 00:36:58.220 clat percentiles (msec): 00:36:58.220 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 12], 00:36:58.220 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 16], 60.00th=[ 17], 00:36:58.220 | 70.00th=[ 21], 80.00th=[ 26], 90.00th=[ 39], 95.00th=[ 54], 00:36:58.220 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 104], 99.95th=[ 104], 00:36:58.220 | 99.99th=[ 104] 00:36:58.220 bw ( KiB/s): min=12288, max=16376, per=21.35%, avg=14332.00, stdev=2890.65, samples=2 00:36:58.220 iops : min= 3072, max= 4094, avg=3583.00, stdev=722.66, samples=2 00:36:58.220 lat (msec) : 2=0.15%, 4=0.28%, 10=9.64%, 20=58.36%, 50=28.45% 00:36:58.220 lat (msec) : 100=2.90%, 250=0.22% 00:36:58.220 cpu : usr=3.08%, sys=4.37%, ctx=337, majf=0, minf=2 00:36:58.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:36:58.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:58.220 issued rwts: total=3200,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:58.220 job3: (groupid=0, jobs=1): err= 0: pid=779694: Wed Nov 20 06:47:29 2024 00:36:58.220 read: IOPS=6074, BW=23.7MiB/s (24.9MB/s)(23.9MiB/1007msec) 00:36:58.220 slat (nsec): min=1327, max=13711k, avg=88605.03, stdev=765906.58 00:36:58.220 clat (usec): min=1514, max=36724, avg=11316.16, stdev=3688.46 00:36:58.220 lat (usec): min=4141, max=38229, avg=11404.76, stdev=3754.93 00:36:58.220 clat percentiles (usec): 00:36:58.220 | 1.00th=[ 6259], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8586], 00:36:58.220 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[11076], 60.00th=[11600], 00:36:58.220 | 70.00th=[11994], 80.00th=[13042], 90.00th=[15533], 95.00th=[19530], 00:36:58.220 | 99.00th=[25035], 99.50th=[25297], 99.90th=[30278], 99.95th=[30278], 00:36:58.220 | 99.99th=[36963] 00:36:58.220 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:36:58.220 slat (usec): min=2, max=10133, avg=68.61, stdev=514.69 00:36:58.220 clat (usec): min=428, max=23013, avg=9490.19, stdev=2674.02 00:36:58.220 lat (usec): min=443, max=23020, 
avg=9558.80, stdev=2700.70 00:36:58.220 clat percentiles (usec): 00:36:58.220 | 1.00th=[ 4015], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 7635], 00:36:58.220 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:36:58.220 | 70.00th=[11207], 80.00th=[11863], 90.00th=[12780], 95.00th=[14091], 00:36:58.220 | 99.00th=[17433], 99.50th=[17433], 99.90th=[17957], 99.95th=[19006], 00:36:58.220 | 99.99th=[22938] 00:36:58.220 bw ( KiB/s): min=20480, max=28672, per=36.62%, avg=24576.00, stdev=5792.62, samples=2 00:36:58.220 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:36:58.220 lat (usec) : 500=0.02% 00:36:58.220 lat (msec) : 2=0.02%, 4=0.44%, 10=55.69%, 20=41.58%, 50=2.24% 00:36:58.220 cpu : usr=5.37%, sys=7.65%, ctx=376, majf=0, minf=1 00:36:58.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:36:58.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:58.220 issued rwts: total=6117,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:58.220 00:36:58.220 Run status group 0 (all jobs): 00:36:58.220 READ: bw=60.3MiB/s (63.2MB/s), 10.0MiB/s-23.7MiB/s (10.5MB/s-24.9MB/s), io=60.7MiB (63.7MB), run=1003-1007msec 00:36:58.220 WRITE: bw=65.5MiB/s (68.7MB/s), 11.9MiB/s-23.8MiB/s (12.5MB/s-25.0MB/s), io=66.0MiB (69.2MB), run=1003-1007msec 00:36:58.220 00:36:58.220 Disk stats (read/write): 00:36:58.220 nvme0n1: ios=2080/2396, merge=0/0, ticks=28240/43916, in_queue=72156, util=98.30% 00:36:58.220 nvme0n2: ios=2882/3072, merge=0/0, ticks=28142/38077, in_queue=66219, util=98.17% 00:36:58.220 nvme0n3: ios=2716/3072, merge=0/0, ticks=42230/62035, in_queue=104265, util=88.98% 00:36:58.220 nvme0n4: ios=5272/5632, merge=0/0, ticks=54338/50331, in_queue=104669, util=99.16% 00:36:58.220 06:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:58.220 [global] 00:36:58.220 thread=1 00:36:58.220 invalidate=1 00:36:58.220 rw=randwrite 00:36:58.220 time_based=1 00:36:58.220 runtime=1 00:36:58.220 ioengine=libaio 00:36:58.220 direct=1 00:36:58.220 bs=4096 00:36:58.220 iodepth=128 00:36:58.220 norandommap=0 00:36:58.220 numjobs=1 00:36:58.220 00:36:58.220 verify_dump=1 00:36:58.220 verify_backlog=512 00:36:58.220 verify_state_save=0 00:36:58.220 do_verify=1 00:36:58.220 verify=crc32c-intel 00:36:58.220 [job0] 00:36:58.220 filename=/dev/nvme0n1 00:36:58.220 [job1] 00:36:58.220 filename=/dev/nvme0n2 00:36:58.220 [job2] 00:36:58.220 filename=/dev/nvme0n3 00:36:58.220 [job3] 00:36:58.220 filename=/dev/nvme0n4 00:36:58.220 Could not set queue depth (nvme0n1) 00:36:58.220 Could not set queue depth (nvme0n2) 00:36:58.220 Could not set queue depth (nvme0n3) 00:36:58.220 Could not set queue depth (nvme0n4) 00:36:58.477 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:58.477 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:58.477 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:58.477 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:58.477 fio-3.35 00:36:58.477 Starting 4 
threads 00:36:59.847 00:36:59.848 job0: (groupid=0, jobs=1): err= 0: pid=780061: Wed Nov 20 06:47:31 2024 00:36:59.848 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:36:59.848 slat (nsec): min=1375, max=19130k, avg=116134.25, stdev=985592.89 00:36:59.848 clat (usec): min=2283, max=53197, avg=14770.56, stdev=6820.67 00:36:59.848 lat (usec): min=2291, max=53204, avg=14886.70, stdev=6911.74 00:36:59.848 clat percentiles (usec): 00:36:59.848 | 1.00th=[ 6259], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10683], 00:36:59.848 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[13042], 00:36:59.848 | 70.00th=[16712], 80.00th=[18482], 90.00th=[22414], 95.00th=[28705], 00:36:59.848 | 99.00th=[44303], 99.50th=[52691], 99.90th=[52691], 99.95th=[53216], 00:36:59.848 | 99.99th=[53216] 00:36:59.848 write: IOPS=4386, BW=17.1MiB/s (18.0MB/s)(17.3MiB/1010msec); 0 zone resets 00:36:59.848 slat (usec): min=2, max=16540, avg=97.20, stdev=796.98 00:36:59.848 clat (usec): min=275, max=100511, avg=15273.23, stdev=14698.78 00:36:59.848 lat (usec): min=279, max=100530, avg=15370.43, stdev=14742.63 00:36:59.848 clat percentiles (usec): 00:36:59.848 | 1.00th=[ 1565], 5.00th=[ 3130], 10.00th=[ 5866], 20.00th=[ 8455], 00:36:59.848 | 30.00th=[ 10683], 40.00th=[ 11469], 50.00th=[ 11863], 60.00th=[ 12387], 00:36:59.848 | 70.00th=[ 12518], 80.00th=[ 15926], 90.00th=[ 21365], 95.00th=[ 50594], 00:36:59.848 | 99.00th=[ 86508], 99.50th=[ 96994], 99.90th=[100140], 99.95th=[100140], 00:36:59.848 | 99.99th=[100140] 00:36:59.848 bw ( KiB/s): min=16384, max=18040, per=22.02%, avg=17212.00, stdev=1170.97, samples=2 00:36:59.848 iops : min= 4096, max= 4510, avg=4303.00, stdev=292.74, samples=2 00:36:59.848 lat (usec) : 500=0.01%, 750=0.07% 00:36:59.848 lat (msec) : 2=1.25%, 4=2.31%, 10=17.03%, 20=65.80%, 50=10.56% 00:36:59.848 lat (msec) : 100=2.94%, 250=0.02% 00:36:59.848 cpu : usr=3.77%, sys=4.66%, ctx=318, majf=0, minf=1 00:36:59.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:59.848 issued rwts: total=4096,4430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:59.848 job1: (groupid=0, jobs=1): err= 0: pid=780063: Wed Nov 20 06:47:31 2024 00:36:59.848 read: IOPS=4836, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1005msec) 00:36:59.848 slat (nsec): min=1626, max=14773k, avg=94857.36, stdev=654743.04 00:36:59.848 clat (usec): min=781, max=31008, avg=11967.40, stdev=3130.38 00:36:59.848 lat (usec): min=5246, max=31016, avg=12062.26, stdev=3157.16 00:36:59.848 clat percentiles (usec): 00:36:59.848 | 1.00th=[ 5932], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[ 9634], 00:36:59.848 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11600], 60.00th=[12125], 00:36:59.848 | 70.00th=[12780], 80.00th=[13960], 90.00th=[15270], 95.00th=[16581], 00:36:59.848 | 99.00th=[24511], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:36:59.848 | 99.99th=[31065] 00:36:59.848 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:36:59.848 slat (usec): min=2, max=37916, avg=99.97, stdev=907.60 00:36:59.848 clat (usec): min=4640, max=49893, avg=13507.16, stdev=7890.35 00:36:59.848 lat (usec): min=4649, max=49897, avg=13607.13, stdev=7930.53 00:36:59.848 clat percentiles (usec): 00:36:59.848 | 1.00th=[ 6783], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[10159], 
00:36:59.848 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:36:59.848 | 70.00th=[12125], 80.00th=[12256], 90.00th=[17433], 95.00th=[25822], 00:36:59.848 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49546], 99.95th=[50070], 00:36:59.848 | 99.99th=[50070] 00:36:59.848 bw ( KiB/s): min=18896, max=22064, per=26.20%, avg=20480.00, stdev=2240.11, samples=2 00:36:59.848 iops : min= 4724, max= 5516, avg=5120.00, stdev=560.03, samples=2 00:36:59.848 lat (usec) : 1000=0.01% 00:36:59.848 lat (msec) : 10=20.33%, 20=74.67%, 50=4.99% 00:36:59.848 cpu : usr=4.48%, sys=5.28%, ctx=497, majf=0, minf=1 00:36:59.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:59.848 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:59.848 job2: (groupid=0, jobs=1): err= 0: pid=780066: Wed Nov 20 06:47:31 2024 00:36:59.848 read: IOPS=4698, BW=18.4MiB/s (19.2MB/s)(18.4MiB/1004msec) 00:36:59.848 slat (nsec): min=1660, max=6405.9k, avg=101058.69, stdev=552798.93 00:36:59.848 clat (usec): min=1786, max=27471, avg=12699.51, stdev=2297.35 00:36:59.848 lat (usec): min=4947, max=27477, avg=12800.57, stdev=2326.99 00:36:59.848 clat percentiles (usec): 00:36:59.848 | 1.00th=[ 6456], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11207], 00:36:59.848 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12649], 60.00th=[13042], 00:36:59.848 | 70.00th=[13435], 80.00th=[13829], 90.00th=[15008], 95.00th=[16188], 00:36:59.848 | 99.00th=[21890], 99.50th=[23462], 99.90th=[27395], 99.95th=[27395], 00:36:59.848 | 99.99th=[27395] 00:36:59.848 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:36:59.848 slat (usec): min=2, max=10691, avg=97.22, stdev=455.19 00:36:59.848 clat (usec): min=4937, max=23890, avg=13076.94, stdev=1917.87 00:36:59.848 lat (usec): min=4943, max=23904, avg=13174.16, stdev=1947.77 00:36:59.848 clat percentiles (usec): 00:36:59.848 | 1.00th=[ 7832], 5.00th=[10683], 10.00th=[11207], 20.00th=[11600], 00:36:59.848 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13304], 60.00th=[13566], 00:36:59.848 | 70.00th=[13698], 80.00th=[13960], 90.00th=[15008], 95.00th=[16909], 00:36:59.848 | 99.00th=[19268], 99.50th=[20055], 99.90th=[20055], 99.95th=[20317], 00:36:59.848 | 99.99th=[23987] 00:36:59.848 bw ( KiB/s): min=20328, max=20480, per=26.10%, avg=20404.00, stdev=107.48, samples=2 00:36:59.848 iops : min= 5082, max= 5120, avg=5101.00, stdev=26.87, samples=2 00:36:59.848 lat (msec) : 2=0.01%, 10=5.38%, 20=93.51%, 50=1.10% 00:36:59.848 cpu : usr=3.39%, sys=5.38%, ctx=645, majf=0, minf=1 00:36:59.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:59.848 issued rwts: total=4717,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:59.848 job3: (groupid=0, jobs=1): err= 0: pid=780067: Wed Nov 20 06:47:31 2024 00:36:59.848 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:36:59.848 slat (nsec): min=1191, max=14028k, avg=104578.41, stdev=796144.46 00:36:59.848 clat (usec): min=1436, max=43685, avg=13800.73, stdev=5634.51 00:36:59.848 lat 
(usec): min=1458, max=48341, avg=13905.31, stdev=5693.71 00:36:59.848 clat percentiles (usec): 00:36:59.848 | 1.00th=[ 3916], 5.00th=[ 6652], 10.00th=[ 9110], 20.00th=[11207], 00:36:59.848 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13173], 00:36:59.848 | 70.00th=[13698], 80.00th=[15270], 90.00th=[18482], 95.00th=[28705], 00:36:59.848 | 99.00th=[34866], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:36:59.848 | 99.99th=[43779] 00:36:59.848 write: IOPS=5020, BW=19.6MiB/s (20.6MB/s)(19.8MiB/1009msec); 0 zone resets 00:36:59.848 slat (nsec): min=1793, max=13658k, avg=94197.62, stdev=733115.50 00:36:59.848 clat (usec): min=1831, max=31104, avg=12668.80, stdev=2327.20 00:36:59.848 lat (usec): min=2013, max=31109, avg=12763.00, stdev=2415.23 00:36:59.848 clat percentiles (usec): 00:36:59.848 | 1.00th=[ 5866], 5.00th=[ 8291], 10.00th=[10683], 20.00th=[11338], 00:36:59.848 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12911], 60.00th=[13173], 00:36:59.848 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14746], 95.00th=[15926], 00:36:59.848 | 99.00th=[20579], 99.50th=[20579], 99.90th=[23462], 99.95th=[28181], 00:36:59.848 | 99.99th=[31065] 00:36:59.848 bw ( KiB/s): min=18680, max=20824, per=25.27%, avg=19752.00, stdev=1516.04, samples=2 00:36:59.848 iops : min= 4670, max= 5206, avg=4938.00, stdev=379.01, samples=2 00:36:59.848 lat (msec) : 2=0.44%, 4=0.31%, 10=9.12%, 20=85.32%, 50=4.81% 00:36:59.848 cpu : usr=3.57%, sys=5.06%, ctx=276, majf=0, minf=1 00:36:59.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:59.848 issued rwts: total=4608,5066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:59.848 00:36:59.848 Run status group 0 (all jobs): 00:36:59.848 READ: bw=70.7MiB/s (74.1MB/s), 15.8MiB/s-18.9MiB/s (16.6MB/s-19.8MB/s), io=71.4MiB (74.9MB), run=1004-1010msec 00:36:59.848 WRITE: bw=76.3MiB/s (80.0MB/s), 17.1MiB/s-19.9MiB/s (18.0MB/s-20.9MB/s), io=77.1MiB (80.8MB), run=1004-1010msec 00:36:59.848 00:36:59.848 Disk stats (read/write): 00:36:59.848 nvme0n1: ios=3237/3584, merge=0/0, ticks=48111/55186, in_queue=103297, util=86.67% 00:36:59.848 nvme0n2: ios=4119/4204, merge=0/0, ticks=32530/34157, in_queue=66687, util=94.00% 00:36:59.848 nvme0n3: ios=4117/4407, merge=0/0, ticks=19580/18305, in_queue=37885, util=98.65% 00:36:59.848 nvme0n4: ios=4105/4103, merge=0/0, ticks=35190/32799, in_queue=67989, util=98.74% 00:36:59.848 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:59.848 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=780297 00:36:59.848 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:59.848 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:59.848 [global] 00:36:59.848 thread=1 00:36:59.848 invalidate=1 00:36:59.848 rw=read 00:36:59.848 time_based=1 00:36:59.848 runtime=10 00:36:59.848 ioengine=libaio 00:36:59.848 direct=1 00:36:59.848 bs=4096 00:36:59.848 iodepth=1 00:36:59.848 norandommap=1 00:36:59.848 numjobs=1 00:36:59.848 00:36:59.848 [job0] 00:36:59.848 filename=/dev/nvme0n1 00:36:59.848 [job1] 
00:36:59.848 filename=/dev/nvme0n2
00:36:59.848 [job2]
00:36:59.848 filename=/dev/nvme0n3
00:36:59.848 [job3]
00:36:59.848 filename=/dev/nvme0n4
00:36:59.848 Could not set queue depth (nvme0n1)
00:36:59.849 Could not set queue depth (nvme0n2)
00:36:59.849 Could not set queue depth (nvme0n3)
00:36:59.849 Could not set queue depth (nvme0n4)
00:37:00.105 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:37:00.105 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:37:00.105 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:37:00.105 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:37:00.105 fio-3.35
00:37:00.105 Starting 4 threads
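With the four readers started at queue depth 1, fio.sh now exercises the hotplug path: it deletes the bdevs that back the subsystem's namespaces while the reads are still in flight. The xtrace below shows the raid/concat bdevs going first, then each Malloc bdev in turn; condensed, the sequence is roughly this (a sketch reconstructed from the @63-@66 traces, using the script's own variable names):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" bdev_raid_delete concat0                    # fio.sh@63
    "$rpc" bdev_raid_delete raid0                      # fio.sh@64
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"       # fio.sh@65-66
    done

Each deletion yanks a namespace out from under a running reader, so the matching fio thread fails its next read and exits with a non-zero err.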
00:37:02.628 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:37:02.886 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=52527104, buflen=4096
00:37:02.886 fio: pid=780438, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:37:02.886 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:37:03.143 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:37:03.143 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:37:03.143 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2310144, buflen=4096
00:37:03.143 fio: pid=780437, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:37:03.143 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:37:03.143 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:37:03.400 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=339968, buflen=4096
00:37:03.400 fio: pid=780435, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:37:03.400 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=335872, buflen=4096
00:37:03.400 fio: pid=780436, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:37:03.400 06:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:37:03.400 06:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
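The interleaved "fio: io_u error ... Operation not supported" lines are errno 95 (EOPNOTSUPP) surfacing through libaio once a namespace's backing bdev is gone; fio records the failure at io_u.c:1889 and aborts the affected job, and the same errno reappears below in each per-job header as err=95. A quick way to pick these out of a long run is something like the following (a convenience one-liner, not part of the harness):

    grep -E 'err=[1-9]' fio-output.log    # list jobs that ended with a non-zero errno

The per-job reports for the aborted run follow.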
00:37:03.658
00:37:03.658 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=780435: Wed Nov 20 06:47:35 2024
00:37:03.658 read: IOPS=26, BW=106KiB/s (108kB/s)(332KiB/3145msec)
00:37:03.658 slat (nsec): min=8273, max=75578, avg=21317.70, stdev=7602.78
00:37:03.658 clat (usec): min=255, max=42030, avg=37602.38, stdev=11354.72
00:37:03.658 lat (usec): min=264, max=42051, avg=37623.69, stdev=11357.27
00:37:03.658 clat percentiles (usec):
00:37:03.658 | 1.00th=[ 255], 5.00th=[ 441], 10.00th=[40633], 20.00th=[41157],
00:37:03.658 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:37:03.658 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681],
00:37:03.658 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:37:03.658 | 99.99th=[42206]
00:37:03.658 bw ( KiB/s): min= 96, max= 144, per=0.65%, avg=106.00, stdev=18.89, samples=6
00:37:03.658 iops : min= 24, max= 36, avg=26.50, stdev= 4.72, samples=6
00:37:03.658 lat (usec) : 500=7.14%, 750=1.19%
00:37:03.658 lat (msec) : 50=90.48%
00:37:03.658 cpu : usr=0.00%, sys=0.10%, ctx=85, majf=0, minf=1
00:37:03.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:03.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.658 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.658 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:03.658 latency : target=0, window=0, percentile=100.00%, depth=1
00:37:03.658 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=780436: Wed Nov 20 06:47:35 2024
00:37:03.658 read: IOPS=24, BW=97.9KiB/s (100kB/s)(328KiB/3349msec)
00:37:03.658 slat (nsec): min=8975, max=63645, avg=16447.84, stdev=9219.66
00:37:03.658 clat (usec): min=382, max=44820, avg=40554.14, stdev=4516.75
00:37:03.658 lat (usec): min=412, max=44841, avg=40570.51, stdev=4515.27
00:37:03.658 clat percentiles (usec):
00:37:03.658 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:37:03.658 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:37:03.658 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:37:03.658 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827],
00:37:03.658 | 99.99th=[44827]
00:37:03.658 bw ( KiB/s): min= 93, max= 104, per=0.61%, avg=98.17, stdev= 4.67, samples=6
00:37:03.658 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6
00:37:03.658 lat (usec) : 500=1.20%
00:37:03.658 lat (msec) : 50=97.59%
00:37:03.658 cpu : usr=0.00%, sys=0.09%, ctx=85, majf=0, minf=2
00:37:03.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:03.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.658 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.658 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:03.658 latency : target=0, window=0, percentile=100.00%, depth=1
00:37:03.658 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=780437: Wed Nov 20 06:47:35 2024
00:37:03.658 read: IOPS=193, BW=773KiB/s (792kB/s)(2256KiB/2917msec)
00:37:03.658 slat (nsec): min=6654, max=34428, avg=9312.90, stdev=3473.95
00:37:03.658 clat (usec): min=209, max=42223, avg=5122.53, stdev=13217.57
00:37:03.658 lat (usec): min=218, max=42232, avg=5131.82, stdev=13218.03
00:37:03.658 clat percentiles (usec):
00:37:03.658 | 1.00th=[ 215], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 223],
00:37:03.658 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 241],
00:37:03.658 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[40633], 95.00th=[41157],
00:37:03.658 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:37:03.658 | 99.99th=[42206]
00:37:03.658 bw ( KiB/s): min= 240, max= 2984, per=5.28%, avg=854.40, stdev=1194.19, samples=5
00:37:03.658 iops : min= 60, max= 746, avg=213.60, stdev=298.55, samples=5
00:37:03.658 lat (usec) : 250=66.55%, 500=20.88%, 750=0.35%
00:37:03.658 lat (msec) : 50=12.04%
00:37:03.658 cpu : usr=0.03%, sys=0.21%, ctx=565, majf=0, minf=2
00:37:03.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:03.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.658 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.658 issued rwts: total=565,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:03.658 latency : target=0, window=0, percentile=100.00%, depth=1
00:37:03.658 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=780438: Wed Nov 20 06:47:35 2024
00:37:03.658 read: IOPS=4737, BW=18.5MiB/s (19.4MB/s)(50.1MiB/2707msec)
00:37:03.658 slat (nsec): min=6707, max=46534, avg=8116.31, stdev=1479.71
00:37:03.658 clat (usec): min=169, max=1614, avg=199.42, stdev=19.21
00:37:03.658 lat (usec): min=186, max=1621, avg=207.53, stdev=19.28
00:37:03.658 clat percentiles (usec):
00:37:03.658 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 192],
00:37:03.658 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 196], 60.00th=[ 198],
00:37:03.658 | 70.00th=[ 202], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 219],
00:37:03.658 | 99.00th=[ 243], 99.50th=[ 260], 99.90th=[ 375], 99.95th=[ 379],
00:37:03.658 | 99.99th=[ 848]
00:37:03.658 bw ( KiB/s): min=18816, max=19384, per=100.00%, avg=19132.80, stdev=203.38, samples=5
00:37:03.658 iops : min= 4704, max= 4846, avg=4783.20, stdev=50.84, samples=5
00:37:03.658 lat (usec) : 250=99.29%, 500=0.69%, 1000=0.01%
00:37:03.658 lat (msec) : 2=0.01%
00:37:03.658 cpu : usr=2.73%, sys=7.24%, ctx=12826, majf=0, minf=2
00:37:03.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:03.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.658 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.658 issued rwts: total=12825,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:03.658 latency : target=0, window=0, percentile=100.00%, depth=1
00:37:03.658
00:37:03.658 Run status group 0 (all jobs):
00:37:03.658 READ: bw=15.8MiB/s (16.6MB/s), 97.9KiB/s-18.5MiB/s (100kB/s-19.4MB/s), io=52.9MiB (55.5MB), run=2707-3349msec
00:37:03.658
00:37:03.658 Disk stats (read/write):
00:37:03.658 nvme0n1: ios=82/0, merge=0/0, ticks=3082/0, in_queue=3082, util=95.72%
00:37:03.658 nvme0n2: ios=76/0, merge=0/0, ticks=3081/0, in_queue=3081, util=96.10%
00:37:03.658 nvme0n3: ios=563/0, merge=0/0, ticks=2846/0, in_queue=2846, util=96.55%
00:37:03.658 nvme0n4: ios=12448/0, merge=0/0, ticks=2327/0, in_queue=2327, util=96.45%
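All four jobs report err=95, confirming the readers died from bdev removal rather than anything on the initiator side; the Run status line still totals what completed before each failure (52.9MiB across the group, almost all of it from job3, which was reading fastest before its device vanished). Teardown follows: the remaining Malloc bdevs are deleted, the backgrounded fio is reaped (it exits 4, hence fio_status=4), and the host disconnects from cnode1. The waitforserial_disconnect helper traced below polls lsblk for the subsystem serial; its core check is approximately this (a sketch; the helper's retry counter and timeout bookkeeping are elided):

    # succeed once no block device carries the test serial any longer
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done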
00:37:03.658 06:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:37:03.659 06:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:37:03.916 06:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:37:03.916 06:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:37:04.174 06:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:37:04.174 06:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:37:04.433 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:37:04.433 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:37:04.433 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:37:04.433 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 780297
00:37:04.433 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:37:04.433 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:37:04.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:37:04.690 nvmf hotplug test: fio failed as expected
00:37:04.690 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
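With the subsystem deleted and the verify-state files removed, nvmftestfini unwinds the fixture: modprobe -v -r nvme-tcp drops the kernel initiator stack (the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going with it), then killprocess stops the nvmf_tgt application, pid 777813. Stripped of its xtrace bookkeeping, the helper reduces to roughly this (a sketch; the real function also inspects ps -o comm= so it can special-case processes launched under sudo, which does not apply to reactor_0 here):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                        # assert the target is still alive
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap it and observe its exit status
    }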
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:04.948 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:04.948 rmmod nvme_tcp
00:37:04.949 rmmod nvme_fabrics
00:37:04.949 rmmod nvme_keyring
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 777813 ']'
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 777813
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 777813 ']'
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 777813
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 777813
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 777813'
00:37:04.949 killing process with pid 777813
00:37:04.949 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 777813
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 777813
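Last comes the network cleanup in nvmf_tcp_fini. The iptr helper removes only the firewall rules the test added: every rule nvmftestinit installs is tagged with an SPDK_NVMF comment, so filtering the dump is enough, as the @791 traces below show:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK_NVMF-tagged rules

After the spdk namespace is torn down and the leftover address flushed from cvl_0_1, the fio_target test closes out (25.8s wall clock) and the harness moves straight on to the nvmf_bdevio test.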
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:05.207 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:05.208 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:07.112 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:07.372
00:37:07.372 real 0m25.829s
00:37:07.372 user 1m30.954s
00:37:07.372 sys 0m10.927s
00:37:07.372 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:37:07.372 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:37:07.372 ************************************
00:37:07.372 END TEST nvmf_fio_target
00:37:07.372 ************************************
00:37:07.372 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:37:07.372 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:37:07.372 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:37:07.372 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:07.372 ************************************
00:37:07.372 START TEST nvmf_bdevio
00:37:07.372 ************************************
00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:37:07.372 * Looking for test storage... 
00:37:07.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.372 --rc genhtml_branch_coverage=1 00:37:07.372 --rc genhtml_function_coverage=1 00:37:07.372 --rc genhtml_legend=1 00:37:07.372 --rc geninfo_all_blocks=1 00:37:07.372 --rc geninfo_unexecuted_blocks=1 00:37:07.372 00:37:07.372 ' 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.372 --rc genhtml_branch_coverage=1 00:37:07.372 --rc genhtml_function_coverage=1 00:37:07.372 --rc genhtml_legend=1 00:37:07.372 --rc geninfo_all_blocks=1 00:37:07.372 --rc geninfo_unexecuted_blocks=1 00:37:07.372 00:37:07.372 ' 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.372 --rc genhtml_branch_coverage=1 00:37:07.372 --rc genhtml_function_coverage=1 00:37:07.372 --rc genhtml_legend=1 00:37:07.372 --rc geninfo_all_blocks=1 00:37:07.372 --rc geninfo_unexecuted_blocks=1 00:37:07.372 00:37:07.372 ' 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.372 --rc genhtml_branch_coverage=1 00:37:07.372 --rc genhtml_function_coverage=1 00:37:07.372 --rc genhtml_legend=1 00:37:07.372 --rc geninfo_all_blocks=1 00:37:07.372 --rc geninfo_unexecuted_blocks=1 00:37:07.372 00:37:07.372 ' 00:37:07.372 06:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.372 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.632 06:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:37:07.632 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:14.199 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:14.199 06:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:14.199 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:14.199 Found net devices under 0000:86:00.0: cvl_0_0 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:14.199 Found net devices under 0000:86:00.1: cvl_0_1 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.199 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.199 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.199 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.199 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:37:14.200 00:37:14.200 --- 10.0.0.2 ping statistics --- 00:37:14.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.200 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:37:14.200 00:37:14.200 --- 10.0.0.1 ping statistics --- 00:37:14.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.200 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:14.200 06:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=784673 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 784673 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 784673 ']' 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:14.200 06:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.200 [2024-11-20 06:47:45.217673] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:14.200 [2024-11-20 06:47:45.218569] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:37:14.200 [2024-11-20 06:47:45.218601] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.200 [2024-11-20 06:47:45.298222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:14.200 [2024-11-20 06:47:45.342738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.200 [2024-11-20 06:47:45.342771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.200 [2024-11-20 06:47:45.342778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.200 [2024-11-20 06:47:45.342785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.200 [2024-11-20 06:47:45.342790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.200 [2024-11-20 06:47:45.344374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:14.200 [2024-11-20 06:47:45.344485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:14.200 [2024-11-20 06:47:45.344593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:14.200 [2024-11-20 06:47:45.344594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:14.200 [2024-11-20 06:47:45.410482] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
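Up to this point the harness has built SPDK's standard two-port loopback topology for NVMe/TCP: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and becomes the target side, while its peer (cvl_0_1) stays in the root namespace as the initiator side, so the test traffic genuinely crosses the NICs. Condensed into a standalone sketch using the interface names, addresses, and nvmf_tgt flags logged above (the trailing ampersand is an assumption; the suite actually launches the target through its nvmfappstart/waitforlisten helpers rather than a bare background job):

    # Rebuild the traced topology: target NIC in a namespace, initiator NIC in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the rule carries an SPDK_NVMF comment so teardown can strip exactly what was added
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both directions reachable
    modprobe nvme-tcp
    # interrupt-mode target pinned to cores 3-6 (mask 0x78), run inside the namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &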
00:37:14.200 [2024-11-20 06:47:45.411289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:14.200 [2024-11-20 06:47:45.411357] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:14.200 [2024-11-20 06:47:45.411779] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:14.200 [2024-11-20 06:47:45.411829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.458 [2024-11-20 06:47:46.105294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.458 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.459 Malloc0 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.459 06:47:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.459 [2024-11-20 06:47:46.193600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:14.459 { 00:37:14.459 "params": { 00:37:14.459 "name": "Nvme$subsystem", 00:37:14.459 "trtype": "$TEST_TRANSPORT", 00:37:14.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:14.459 "adrfam": "ipv4", 00:37:14.459 "trsvcid": "$NVMF_PORT", 00:37:14.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:14.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:14.459 "hdgst": ${hdgst:-false}, 00:37:14.459 "ddgst": ${ddgst:-false} 00:37:14.459 }, 00:37:14.459 "method": "bdev_nvme_attach_controller" 00:37:14.459 } 00:37:14.459 EOF 00:37:14.459 )") 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:37:14.459 06:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:14.459 "params": { 00:37:14.459 "name": "Nvme1", 00:37:14.459 "trtype": "tcp", 00:37:14.459 "traddr": "10.0.0.2", 00:37:14.459 "adrfam": "ipv4", 00:37:14.459 "trsvcid": "4420", 00:37:14.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:14.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:14.459 "hdgst": false, 00:37:14.459 "ddgst": false 00:37:14.459 }, 00:37:14.459 "method": "bdev_nvme_attach_controller" 00:37:14.459 }' 00:37:14.459 [2024-11-20 06:47:46.244252] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
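The rpc_cmd lines traced above configure the entire target in five calls. rpc_cmd is the suite's thin wrapper around scripts/rpc.py, so an equivalent standalone sequence, a sketch that assumes the default /var/tmp/spdk.sock RPC socket and the SPDK repo root as working directory, is:

    rpc=scripts/rpc.py                            # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192  # flags exactly as traced by bdevio.sh@18
    $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB ramdisk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The /dev/fd/62 handed to bdevio is ordinary bash process substitution: the attach JSON printed above is generated by gen_nvmf_target_json and fed in as bdevio --json <(gen_nvmf_target_json), so the test binary connects to 10.0.0.2:4420 as host nqn.2016-06.io.spdk:host1.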
00:37:14.459 [2024-11-20 06:47:46.244310] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784920 ] 00:37:14.716 [2024-11-20 06:47:46.322252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:14.716 [2024-11-20 06:47:46.366551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.716 [2024-11-20 06:47:46.366578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.716 [2024-11-20 06:47:46.366579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:14.973 I/O targets: 00:37:14.973 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:14.973 00:37:14.973 00:37:14.973 CUnit - A unit testing framework for C - Version 2.1-3 00:37:14.973 http://cunit.sourceforge.net/ 00:37:14.973 00:37:14.973 00:37:14.973 Suite: bdevio tests on: Nvme1n1 00:37:14.973 Test: blockdev write read block ...passed 00:37:14.973 Test: blockdev write zeroes read block ...passed 00:37:14.973 Test: blockdev write zeroes read no split ...passed 00:37:14.973 Test: blockdev write zeroes read split ...passed 00:37:14.973 Test: blockdev write zeroes read split partial ...passed 00:37:14.973 Test: blockdev reset ...[2024-11-20 06:47:46.750855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:14.973 [2024-11-20 06:47:46.750922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b7340 (9): Bad file descriptor 00:37:14.973 [2024-11-20 06:47:46.795295] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:37:14.973 passed 00:37:14.973 Test: blockdev write read 8 blocks ...passed 00:37:14.973 Test: blockdev write read size > 128k ...passed 00:37:14.973 Test: blockdev write read invalid size ...passed 00:37:15.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:15.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:15.231 Test: blockdev write read max offset ...passed 00:37:15.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:15.231 Test: blockdev writev readv 8 blocks ...passed 00:37:15.231 Test: blockdev writev readv 30 x 1block ...passed 00:37:15.231 Test: blockdev writev readv block ...passed 00:37:15.231 Test: blockdev writev readv size > 128k ...passed 00:37:15.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:15.231 Test: blockdev comparev and writev ...[2024-11-20 06:47:47.045988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.231 [2024-11-20 06:47:47.046021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.231 [2024-11-20 06:47:47.046036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.231 [2024-11-20 06:47:47.046044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:15.231 [2024-11-20 06:47:47.046351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.231 [2024-11-20 06:47:47.046362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:15.231 [2024-11-20 06:47:47.046373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.231 [2024-11-20 06:47:47.046380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:15.231 [2024-11-20 06:47:47.046670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.231 [2024-11-20 06:47:47.046680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:15.231 [2024-11-20 06:47:47.046692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.231 [2024-11-20 06:47:47.046699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:15.231 [2024-11-20 06:47:47.046984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.231 [2024-11-20 06:47:47.046998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:15.231 [2024-11-20 06:47:47.047010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.231 [2024-11-20 06:47:47.047017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:15.488 passed 00:37:15.488 Test: blockdev nvme passthru rw ...passed 00:37:15.488 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:47:47.128446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:15.488 [2024-11-20 06:47:47.128462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:15.488 [2024-11-20 06:47:47.128578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:15.488 [2024-11-20 06:47:47.128588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:15.488 [2024-11-20 06:47:47.128696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:15.488 [2024-11-20 06:47:47.128706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:15.488 [2024-11-20 06:47:47.128813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:15.488 [2024-11-20 06:47:47.128822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:15.488 passed 00:37:15.488 Test: blockdev nvme admin passthru ...passed 00:37:15.488 Test: blockdev copy ...passed 00:37:15.488 00:37:15.488 Run Summary: Type Total Ran Passed Failed Inactive 00:37:15.488 suites 1 1 n/a 0 0 00:37:15.488 tests 23 23 23 0 0 00:37:15.488 asserts 152 152 152 0 n/a 00:37:15.488 00:37:15.488 Elapsed time = 1.250 seconds 00:37:15.488 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:15.488 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.488 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:15.746 rmmod nvme_tcp 00:37:15.746 rmmod nvme_fabrics 00:37:15.746 rmmod nvme_keyring 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
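With the bdevio suite passed (23/23 tests, 152/152 asserts), nvmftestfini unwinds everything the init path set up; the remaining steps continue below. Collapsed into one sketch (the pid and interface names are this run's; the trace does not expand the _remove_spdk_ns helper, so the ip netns delete line is an assumption about its effect):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp                       # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics                   # output above came from these two calls
    kill 784673                                   # killprocess $nvmfpid
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the tagged rule
    ip netns delete cvl_0_0_ns_spdk               # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1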
00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 784673 ']' 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 784673 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 784673 ']' 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 784673 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 784673 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 784673' 00:37:15.746 killing process with pid 784673 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 784673 00:37:15.746 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 784673 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:16.005 06:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.906 06:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:17.906 00:37:17.906 real 0m10.698s 00:37:17.906 user 0m9.255s 
00:37:17.906 sys 0m5.318s 00:37:17.906 06:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:17.906 06:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:17.906 ************************************ 00:37:17.906 END TEST nvmf_bdevio 00:37:17.906 ************************************ 00:37:18.164 06:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:18.164 00:37:18.164 real 4m32.744s 00:37:18.164 user 9m4.800s 00:37:18.164 sys 1m50.852s 00:37:18.164 06:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:18.164 06:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:18.164 ************************************ 00:37:18.164 END TEST nvmf_target_core_interrupt_mode 00:37:18.164 ************************************ 00:37:18.164 06:47:49 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:18.164 06:47:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:18.164 06:47:49 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:18.164 06:47:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:18.164 ************************************ 00:37:18.164 START TEST nvmf_interrupt 00:37:18.164 ************************************ 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:18.164 * Looking for test storage... 
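Before interrupt.sh does any real work, autotest_common.sh checks the installed lcov against version 2; the scripts/common.sh trace just below steps a field-wise comparator through lt 1.15 2. A minimal standalone version of that check (simplified sketch: the real cmp_versions also implements '>', '>=', and '<=', and this assumes purely numeric dot- or dash-separated fields):

    lt() {   # succeed when version $1 sorts strictly before $2
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater field: not less
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field: less
        done
        return 1   # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov predates 2.x'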
00:37:18.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:37:18.164 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:18.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.424 --rc genhtml_branch_coverage=1 00:37:18.424 --rc genhtml_function_coverage=1 00:37:18.424 --rc genhtml_legend=1 00:37:18.424 --rc geninfo_all_blocks=1 00:37:18.424 --rc geninfo_unexecuted_blocks=1 00:37:18.424 00:37:18.424 ' 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:18.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.424 --rc genhtml_branch_coverage=1 00:37:18.424 --rc genhtml_function_coverage=1 00:37:18.424 --rc genhtml_legend=1 00:37:18.424 --rc geninfo_all_blocks=1 00:37:18.424 --rc geninfo_unexecuted_blocks=1 00:37:18.424 00:37:18.424 ' 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:18.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.424 --rc genhtml_branch_coverage=1 00:37:18.424 --rc genhtml_function_coverage=1 00:37:18.424 --rc genhtml_legend=1 00:37:18.424 --rc geninfo_all_blocks=1 00:37:18.424 --rc geninfo_unexecuted_blocks=1 00:37:18.424 00:37:18.424 ' 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:18.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.424 --rc genhtml_branch_coverage=1 00:37:18.424 --rc genhtml_function_coverage=1 00:37:18.424 --rc genhtml_legend=1 00:37:18.424 --rc geninfo_all_blocks=1 00:37:18.424 --rc geninfo_unexecuted_blocks=1 00:37:18.424 00:37:18.424 ' 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:18.424 06:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:18.424 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.425 06:47:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:18.425 06:47:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:18.425 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:18.425 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:18.425 06:47:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:37:18.425 06:47:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:24.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.987 06:47:55 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:24.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:24.987 Found net devices under 0000:86:00.0: cvl_0_0 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.987 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:24.988 Found net devices under 0000:86:00.1: cvl_0_1 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:24.988 06:47:55 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:24.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:24.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:37:24.988 00:37:24.988 --- 10.0.0.2 ping statistics --- 00:37:24.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.988 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:24.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:24.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:37:24.988 00:37:24.988 --- 10.0.0.1 ping statistics --- 00:37:24.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.988 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=788671 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 788671 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 788671 ']' 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:24.988 06:47:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.988 [2024-11-20 06:47:56.011882] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:24.988 [2024-11-20 06:47:56.012874] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:37:24.988 [2024-11-20 06:47:56.012913] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.988 [2024-11-20 06:47:56.093350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:24.988 [2024-11-20 06:47:56.134458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:24.988 [2024-11-20 06:47:56.134492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.988 [2024-11-20 06:47:56.134498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.988 [2024-11-20 06:47:56.134504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.988 [2024-11-20 06:47:56.134509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:24.988 [2024-11-20 06:47:56.135677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.988 [2024-11-20 06:47:56.135679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.988 [2024-11-20 06:47:56.201330] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:24.988 [2024-11-20 06:47:56.201749] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:24.988 [2024-11-20 06:47:56.202022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:24.988 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:24.988 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:37:24.988 06:47:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:24.988 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:24.988 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.988 06:47:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:24.989 5000+0 records in 00:37:24.989 5000+0 records out 00:37:24.989 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0165267 s, 620 MB/s 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.989 AIO0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.989 [2024-11-20 06:47:56.328560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.989 06:47:56 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:24.989 [2024-11-20 06:47:56.368800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 788671 0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 788671 0 idle 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=788671 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788671 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0' 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788671 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 788671 1 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 788671 1 idle 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=788671 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788693 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788693 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=788730 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
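The reactor_is_busy_or_idle probes traced above and below reduce to the following condensed sketch, reconstructed from the xtrace itself (the names mirror interrupt/common.sh, but this is not the verbatim helper):

# Sketch, reconstructed from the trace; not the verbatim interrupt/common.sh code.
reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3
    local busy_threshold=65 idle_threshold=30   # busy checks below override BUSY_THRESHOLD=30
    local j top_reactor cpu_rate
    hash top || return 1                        # bail out if top is unavailable
    for (( j = 10; j != 0; j-- )); do
        # One batch iteration of top in thread view, reduced to the reactor's row;
        # column 9 is %CPU, truncated to an integer for the comparisons.
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}
        if [[ $state == busy ]]; then
            (( cpu_rate < busy_threshold )) && { sleep 1; continue; }   # not busy yet, retry
        else
            (( cpu_rate > idle_threshold )) && { sleep 1; continue; }   # not idle yet, retry
        fi
        return 0
    done
    return 1   # assumed failure path once the retries run out (not hit in this run)
}

This explains the probes seen in the trace: with the default idle_threshold of 30, a reactor in interrupt mode sits at 0.0% CPU and passes immediately, while the busy checks that follow override BUSY_THRESHOLD to 30 and sleep-and-retry until the perf workload pushes the rate past it.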
00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 788671 0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 788671 0 busy 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=788671 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256 00:37:24.989 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:25.246 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788671 root 20 0 128.2g 47616 34560 R 20.0 0.0 0:00.29 reactor_0' 00:37:25.246 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788671 root 20 0 128.2g 47616 34560 R 20.0 0.0 0:00.29 reactor_0 00:37:25.247 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:25.247 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:25.247 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=20.0 00:37:25.247 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=20 00:37:25.247 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:25.247 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:25.247 06:47:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:37:26.177 06:47:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:37:26.177 06:47:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:26.177 06:47:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256 00:37:26.177 06:47:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788671 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.64 reactor_0' 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788671 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.64 reactor_0 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 788671 1 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 788671 1 busy 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=788671 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256 00:37:26.433 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788693 root 20 0 128.2g 47616 34560 R 93.8 0.0 0:01.37 reactor_1' 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788693 root 20 0 128.2g 47616 34560 R 93.8 0.0 0:01.37 reactor_1 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:26.690 06:47:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 788730 00:37:36.644 Initializing NVMe Controllers 00:37:36.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:36.644 Controller IO queue size 256, less than required. 00:37:36.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:36.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:36.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:36.644 Initialization complete. Launching workers. 
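For readers of the results below, the spdk_nvme_perf flags used at target/interrupt.sh@31 above decode as follows (summarized from the tool's usage text; only the values from this run are shown):

# spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r <trid>
#   -q 256     queue depth: up to 256 outstanding I/Os per queue pair
#   -o 4096    I/O size in bytes
#   -w randrw  random mixed read/write workload
#   -M 30      read percentage of the mix (30% reads / 70% writes)
#   -t 10      run time in seconds
#   -c 0xC     core mask selecting cores 2 and 3, which is why the table
#              below reports one row "from core 2" and one "from core 3"
#   -r <trid>  transport ID of the NVMe/TCP listener created earlier
#              ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
#               subnqn:nqn.2016-06.io.spdk:cnode1')

As a sanity check, the totals below are self-consistent: 34045.89 IOPS at 4096 bytes per I/O is about 139.5 MB/s, i.e. the 132.99 MiB/s reported.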
00:37:36.644 ========================================================
00:37:36.644 Latency(us)
00:37:36.644 Device Information : IOPS MiB/s Average min max
00:37:36.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16930.60 66.14 15127.88 2908.85 29738.65
00:37:36.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17115.30 66.86 14961.74 7564.13 26088.90
00:37:36.644 ========================================================
00:37:36.644 Total : 34045.89 132.99 15044.36 2908.85 29738.65
00:37:36.644
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 788671 0
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 788671 0 idle
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=788671
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256
00:37:36.644 06:48:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788671 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.26 reactor_0'
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788671 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.26 reactor_0
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 788671 1
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 788671 1 idle
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=788671
00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788693 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788693 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:37:36.644 06:48:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 788671 0 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 788671 0 idle 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=788671 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256 00:37:38.020 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788671 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.49 reactor_0' 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788671 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.49 reactor_0 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 788671 1 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 788671 1 idle 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=788671 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:38.278 06:48:09 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 788671 -w 256 00:37:38.278 06:48:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 788693 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.08 reactor_1' 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 788693 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.08 reactor_1 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:38.536 06:48:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:38.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:38.795 rmmod nvme_tcp 00:37:38.795 rmmod nvme_fabrics 00:37:38.795 rmmod nvme_keyring 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 788671 ']' 00:37:38.795 
06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 788671 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 788671 ']' 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 788671 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 788671 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 788671' 00:37:38.795 killing process with pid 788671 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 788671 00:37:38.795 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 788671 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:39.053 06:48:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.586 06:48:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:41.586 00:37:41.586 real 0m22.983s 00:37:41.586 user 0m39.533s 00:37:41.586 sys 0m8.613s 00:37:41.586 06:48:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:41.586 06:48:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:41.586 ************************************ 00:37:41.586 END TEST nvmf_interrupt 00:37:41.586 ************************************ 00:37:41.586 00:37:41.586 real 27m33.969s 00:37:41.586 user 56m53.630s 00:37:41.586 sys 9m15.789s 00:37:41.586 06:48:12 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:41.586 06:48:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:41.586 ************************************ 00:37:41.586 END TEST nvmf_tcp 00:37:41.586 ************************************ 00:37:41.586 06:48:12 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:37:41.586 06:48:12 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:41.586 06:48:12 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 
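One detail worth calling out from the teardown above: the tests tag their firewall rules so cleanup needs no per-rule bookkeeping. The pattern below is reconstructed from the ipts call at nvmf/common.sh@287/@790 and the iptr call at nvmf/common.sh@297/@791 in the trace (the wrapper functions themselves are not shown verbatim here):

# Setup inserts each rule with a recognizable comment tag:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Teardown then drops every tagged rule in one pass, however many were added,
# by filtering the saved ruleset and restoring what remains:
iptables-save | grep -v SPDK_NVMF | iptables-restore

This is why the ACCEPT rule added during setup never needs a matching delete call anywhere in the test body.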
00:37:41.586 06:48:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:41.586 06:48:12 -- common/autotest_common.sh@10 -- # set +x 00:37:41.586 ************************************ 00:37:41.586 START TEST spdkcli_nvmf_tcp 00:37:41.586 ************************************ 00:37:41.586 06:48:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:41.586 * Looking for test storage... 00:37:41.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:41.586 06:48:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:41.586 06:48:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:37:41.586 06:48:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.586 --rc genhtml_branch_coverage=1 00:37:41.586 --rc genhtml_function_coverage=1 00:37:41.586 --rc genhtml_legend=1 00:37:41.586 --rc geninfo_all_blocks=1 00:37:41.586 --rc geninfo_unexecuted_blocks=1 00:37:41.586 00:37:41.586 ' 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.586 --rc genhtml_branch_coverage=1 00:37:41.586 --rc genhtml_function_coverage=1 00:37:41.586 --rc genhtml_legend=1 00:37:41.586 --rc geninfo_all_blocks=1 00:37:41.586 --rc geninfo_unexecuted_blocks=1 00:37:41.586 00:37:41.586 ' 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.586 --rc genhtml_branch_coverage=1 00:37:41.586 --rc genhtml_function_coverage=1 00:37:41.586 --rc genhtml_legend=1 00:37:41.586 --rc geninfo_all_blocks=1 00:37:41.586 --rc geninfo_unexecuted_blocks=1 00:37:41.586 00:37:41.586 ' 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.586 --rc genhtml_branch_coverage=1 00:37:41.586 --rc genhtml_function_coverage=1 00:37:41.586 --rc genhtml_legend=1 00:37:41.586 --rc geninfo_all_blocks=1 00:37:41.586 --rc geninfo_unexecuted_blocks=1 00:37:41.586 00:37:41.586 ' 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:41.586 
06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:41.586 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:41.587 06:48:13 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:41.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=791970 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 791970 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 791970 ']' 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:41.587 [2024-11-20 06:48:13.168233] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:37:41.587 [2024-11-20 06:48:13.168283] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791970 ] 00:37:41.587 [2024-11-20 06:48:13.242303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:41.587 [2024-11-20 06:48:13.286072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.587 [2024-11-20 06:48:13.286074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:41.587 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:41.846 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:41.846 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:41.846 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:41.846 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:41.846 06:48:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:41.846 06:48:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:41.846 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:41.846 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:41.846 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:41.846 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:41.846 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:41.846 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:41.846 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:41.846 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:41.846 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:41.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:41.846 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:41.846 ' 00:37:44.375 [2024-11-20 06:48:16.124109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.749 [2024-11-20 06:48:17.464685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:48.278 [2024-11-20 06:48:19.948286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:50.807 [2024-11-20 06:48:22.110984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:52.181 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:52.181 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:52.181 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:52.181 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:52.181 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:52.181 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:52.181 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:52.181 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:52.181 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:52.181 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:52.181 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:52.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:52.181 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:52.181 06:48:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:52.181 06:48:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:52.181 06:48:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:52.181 06:48:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:52.181 06:48:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:52.181 06:48:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:52.181 06:48:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:52.181 06:48:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:52.747 06:48:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:52.747 06:48:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:52.747 06:48:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:52.747 06:48:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:52.747 06:48:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:52.747 
06:48:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:52.747 06:48:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:52.747 06:48:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:52.747 06:48:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:52.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:52.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:52.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:52.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:52.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:52.747 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:52.747 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:52.747 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:52.747 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:52.747 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:52.747 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:52.747 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:52.747 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:52.747 ' 00:37:58.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:58.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:58.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:58.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:58.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:58.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:58.081 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:58.081 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:58.081 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:58.081 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:58.081 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:58.081 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:58.081 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:58.081 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:58.401 
06:48:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 791970 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 791970 ']' 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 791970 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 791970 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 791970' 00:37:58.401 killing process with pid 791970 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 791970 00:37:58.401 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 791970 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 791970 ']' 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 791970 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 791970 ']' 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 791970 00:37:58.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (791970) - No such process 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 791970 is not found' 00:37:58.660 Process with pid 791970 is not found 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:58.660 00:37:58.660 real 0m17.348s 00:37:58.660 user 0m38.270s 00:37:58.660 sys 0m0.800s 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:58.660 06:48:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:58.660 ************************************ 00:37:58.660 END TEST spdkcli_nvmf_tcp 00:37:58.660 ************************************ 00:37:58.660 06:48:30 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:58.660 06:48:30 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:58.660 06:48:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:58.660 06:48:30 -- common/autotest_common.sh@10 -- # set +x 00:37:58.660 ************************************ 00:37:58.660 START TEST nvmf_identify_passthru 00:37:58.660 ************************************ 00:37:58.660 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:58.660 * Looking for test storage... 
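[note] The spdkcli_nvmf_tcp run that just finished drives the whole NVMe-oF/TCP object tree through SPDK's interactive CLI and then diffs the `ll /nvmf` dump against a match file. Condensed to its essentials — a minimal sketch, assuming a running nvmf_tgt and the in-tree scripts/spdkcli.py, with the command strings lifted verbatim from the job log above:

    # create a backing ramdisk, the TCP transport, and one subsystem
    spdkcli.py /bdevs/malloc create 32 512 Malloc3
    spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    # expose the ramdisk as nsid 1 and listen on loopback
    spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
    spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    # verify: dump the tree and compare against spdkcli_nvmf.test.match
    spdkcli.py ll /nvmf

In the job itself the commands are batched through spdkcli_job.py, which pairs each command with a substring expected in its output (e.g. '127.0.0.1:4260'); the clear_nvmf_config phase then walks the same tree back down with delete/delete_all before killing the target.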
00:37:58.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:58.660 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:58.660 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:37:58.660 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:58.660 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:58.660 06:48:30 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:58.660 06:48:30 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:58.660 06:48:30 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:58.919 06:48:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:58.920 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:58.920 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:58.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:58.920 --rc genhtml_branch_coverage=1 00:37:58.920 --rc genhtml_function_coverage=1 00:37:58.920 --rc genhtml_legend=1 00:37:58.920 --rc geninfo_all_blocks=1 00:37:58.920 --rc geninfo_unexecuted_blocks=1 00:37:58.920 00:37:58.920 ' 00:37:58.920 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:58.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:58.920 --rc genhtml_branch_coverage=1 00:37:58.920 --rc genhtml_function_coverage=1 00:37:58.920 --rc genhtml_legend=1 00:37:58.920 --rc geninfo_all_blocks=1 00:37:58.920 --rc geninfo_unexecuted_blocks=1 00:37:58.920 00:37:58.920 ' 00:37:58.920 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:58.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:58.920 --rc genhtml_branch_coverage=1 00:37:58.920 --rc genhtml_function_coverage=1 00:37:58.920 --rc genhtml_legend=1 00:37:58.920 --rc geninfo_all_blocks=1 00:37:58.920 --rc geninfo_unexecuted_blocks=1 00:37:58.920 00:37:58.920 ' 00:37:58.920 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:58.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:58.920 --rc genhtml_branch_coverage=1 00:37:58.920 --rc genhtml_function_coverage=1 00:37:58.920 --rc genhtml_legend=1 00:37:58.920 --rc geninfo_all_blocks=1 00:37:58.920 --rc geninfo_unexecuted_blocks=1 00:37:58.920 00:37:58.920 ' 00:37:58.920 06:48:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:58.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:58.920 06:48:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:58.920 06:48:30 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:58.920 06:48:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.920 06:48:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.920 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:58.920 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:58.920 06:48:30 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:58.921 06:48:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:38:05.487 06:48:36 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:05.487 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:05.487 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:05.487 Found net devices under 0000:86:00.0: cvl_0_0 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:05.487 Found net devices under 0000:86:00.1: cvl_0_1 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:05.487 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:05.487 06:48:36 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:05.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:05.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:38:05.488 00:38:05.488 --- 10.0.0.2 ping statistics --- 00:38:05.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.488 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:05.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:05.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:38:05.488 00:38:05.488 --- 10.0.0.1 ping statistics --- 00:38:05.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.488 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:05.488 06:48:36 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:05.488 06:48:36 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:05.488 06:48:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:38:05.488 06:48:36 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:38:05.488 06:48:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:38:05.488 06:48:36 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:38:05.488 06:48:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:38:05.488 06:48:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:05.488 06:48:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:09.673 06:48:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLN951000C61P6AGN 00:38:09.673 06:48:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:38:09.673 06:48:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:09.673 06:48:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:14.939 06:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:38:14.939 06:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.939 06:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.939 06:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=799437 00:38:14.939 06:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:14.939 06:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:14.939 06:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 799437 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 799437 ']' 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:14.939 06:48:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.939 [2024-11-20 06:48:46.020934] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:38:14.939 [2024-11-20 06:48:46.020978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:14.939 [2024-11-20 06:48:46.099089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:14.939 [2024-11-20 06:48:46.142166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:14.939 [2024-11-20 06:48:46.142206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:14.939 [2024-11-20 06:48:46.142213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:14.939 [2024-11-20 06:48:46.142219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:14.939 [2024-11-20 06:48:46.142224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:14.939 [2024-11-20 06:48:46.143665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.939 [2024-11-20 06:48:46.143777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:14.939 [2024-11-20 06:48:46.143883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.939 [2024-11-20 06:48:46.143883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:14.939 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:14.939 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:38:14.939 06:48:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:14.939 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.939 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.939 INFO: Log level set to 20 00:38:14.939 INFO: Requests: 00:38:14.939 { 00:38:14.939 "jsonrpc": "2.0", 00:38:14.939 "method": "nvmf_set_config", 00:38:14.940 "id": 1, 00:38:14.940 "params": { 00:38:14.940 "admin_cmd_passthru": { 00:38:14.940 "identify_ctrlr": true 00:38:14.940 } 00:38:14.940 } 00:38:14.940 } 00:38:14.940 00:38:14.940 INFO: response: 00:38:14.940 { 00:38:14.940 "jsonrpc": "2.0", 00:38:14.940 "id": 1, 00:38:14.940 "result": true 00:38:14.940 } 00:38:14.940 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.940 06:48:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.940 INFO: Setting log level to 20 00:38:14.940 INFO: Setting log level to 20 00:38:14.940 INFO: Log level set to 20 00:38:14.940 INFO: Log level set to 20 00:38:14.940 INFO: Requests: 00:38:14.940 { 00:38:14.940 "jsonrpc": "2.0", 00:38:14.940 "method": "framework_start_init", 00:38:14.940 "id": 1 00:38:14.940 } 00:38:14.940 00:38:14.940 INFO: Requests: 00:38:14.940 { 00:38:14.940 "jsonrpc": "2.0", 00:38:14.940 "method": "framework_start_init", 00:38:14.940 "id": 1 00:38:14.940 } 00:38:14.940 00:38:14.940 [2024-11-20 06:48:46.247441] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:14.940 INFO: response: 00:38:14.940 { 00:38:14.940 "jsonrpc": "2.0", 00:38:14.940 "id": 1, 00:38:14.940 "result": true 00:38:14.940 } 00:38:14.940 00:38:14.940 INFO: response: 00:38:14.940 { 00:38:14.940 "jsonrpc": "2.0", 00:38:14.940 "id": 1, 00:38:14.940 "result": true 00:38:14.940 } 00:38:14.940 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.940 06:48:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.940 06:48:46 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:14.940 INFO: Setting log level to 40 00:38:14.940 INFO: Setting log level to 40 00:38:14.940 INFO: Setting log level to 40 00:38:14.940 [2024-11-20 06:48:46.260758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.940 06:48:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.940 06:48:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.940 06:48:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.465 Nvme0n1 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.465 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.465 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.465 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.465 [2024-11-20 06:48:49.168699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.465 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.465 [ 00:38:17.465 { 00:38:17.465 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:17.465 "subtype": "Discovery", 00:38:17.465 "listen_addresses": [], 00:38:17.465 "allow_any_host": true, 00:38:17.465 "hosts": [] 00:38:17.465 }, 00:38:17.465 { 00:38:17.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:17.465 "subtype": "NVMe", 00:38:17.465 "listen_addresses": [ 00:38:17.465 { 00:38:17.465 "trtype": "TCP", 00:38:17.465 "adrfam": "IPv4", 00:38:17.465 "traddr": "10.0.0.2", 00:38:17.465 "trsvcid": "4420" 00:38:17.465 } 00:38:17.465 ], 00:38:17.465 "allow_any_host": true, 00:38:17.465 "hosts": [], 00:38:17.465 "serial_number": 
"SPDK00000000000001", 00:38:17.465 "model_number": "SPDK bdev Controller", 00:38:17.465 "max_namespaces": 1, 00:38:17.465 "min_cntlid": 1, 00:38:17.465 "max_cntlid": 65519, 00:38:17.465 "namespaces": [ 00:38:17.465 { 00:38:17.465 "nsid": 1, 00:38:17.465 "bdev_name": "Nvme0n1", 00:38:17.465 "name": "Nvme0n1", 00:38:17.465 "nguid": "007999AD2DE74BBFBFB8A3BB63EEA89E", 00:38:17.465 "uuid": "007999ad-2de7-4bbf-bfb8-a3bb63eea89e" 00:38:17.465 } 00:38:17.465 ] 00:38:17.465 } 00:38:17.465 ] 00:38:17.465 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.465 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:17.465 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:17.465 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:17.722 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.722 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.722 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:17.722 06:48:49 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:17.980 rmmod nvme_tcp 00:38:17.980 rmmod nvme_fabrics 00:38:17.980 rmmod nvme_keyring 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 799437 ']' 00:38:17.980 06:48:49 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 799437 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 799437 ']' 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 799437 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 799437 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 799437' 00:38:17.980 killing process with pid 799437 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 799437 00:38:17.980 06:48:49 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 799437 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:19.878 06:48:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.878 06:48:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:19.878 06:48:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.421 06:48:53 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:22.421 00:38:22.421 real 0m23.373s 00:38:22.421 user 0m29.497s 00:38:22.421 sys 0m6.367s 00:38:22.421 06:48:53 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:22.421 06:48:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:22.421 ************************************ 00:38:22.421 END TEST nvmf_identify_passthru 00:38:22.421 ************************************ 00:38:22.421 06:48:53 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:22.421 06:48:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:22.421 06:48:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:22.421 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:38:22.421 ************************************ 00:38:22.421 START TEST nvmf_dif 00:38:22.421 ************************************ 00:38:22.421 06:48:53 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:22.421 * Looking for test storage... 
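[note] Stripped of the harness, the nvmf_identify_passthru test above is a short RPC sequence: start nvmf_tgt inside the target namespace with --wait-for-rpc, switch on identify passthrough before framework init, re-export the local PCIe drive over TCP, and require that IDENTIFY data fetched over the fabric matches the data fetched from the PCIe device. A condensed sketch, with rpc.py standing in for the harness's rpc_cmd and the BDF/IP values being the ones this particular host probed:

    # enable forwarding of IDENTIFY ctrlr to the backing controller (must precede init)
    rpc.py nvmf_set_config --passthru-identify-ctrlr
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # attach the local NVMe drive and export it as nsid 1 of cnode1
    rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # pass criterion: serial/model over the fabric == serial/model from the PCIe device
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'

Here both paths reported the same serial (PHLN951000C61P6AGN) and model (INTEL), so the '!=' checks at identify_passthru.sh@63/@68 fell through and the test passed.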
00:38:22.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:22.421 06:48:53 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:22.421 06:48:53 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:38:22.421 06:48:53 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:22.421 06:48:53 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:22.421 06:48:53 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:22.421 06:48:53 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:22.421 06:48:53 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:22.421 06:48:53 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:38:22.421 06:48:53 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:38:22.421 06:48:53 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:22.422 06:48:53 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:38:22.422 06:48:53 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:22.422 06:48:53 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:22.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.422 --rc genhtml_branch_coverage=1 00:38:22.422 --rc genhtml_function_coverage=1 00:38:22.422 --rc genhtml_legend=1 00:38:22.422 --rc geninfo_all_blocks=1 00:38:22.422 --rc geninfo_unexecuted_blocks=1 00:38:22.422 00:38:22.422 ' 00:38:22.422 06:48:53 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:22.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.422 --rc genhtml_branch_coverage=1 00:38:22.422 --rc genhtml_function_coverage=1 00:38:22.422 --rc genhtml_legend=1 00:38:22.422 --rc geninfo_all_blocks=1 00:38:22.422 --rc geninfo_unexecuted_blocks=1 00:38:22.422 00:38:22.422 ' 00:38:22.422 06:48:53 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:38:22.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.422 --rc genhtml_branch_coverage=1 00:38:22.422 --rc genhtml_function_coverage=1 00:38:22.422 --rc genhtml_legend=1 00:38:22.422 --rc geninfo_all_blocks=1 00:38:22.422 --rc geninfo_unexecuted_blocks=1 00:38:22.422 00:38:22.422 ' 00:38:22.422 06:48:53 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:22.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.422 --rc genhtml_branch_coverage=1 00:38:22.422 --rc genhtml_function_coverage=1 00:38:22.422 --rc genhtml_legend=1 00:38:22.422 --rc geninfo_all_blocks=1 00:38:22.422 --rc geninfo_unexecuted_blocks=1 00:38:22.422 00:38:22.422 ' 00:38:22.422 06:48:53 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.422 06:48:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.423 06:48:53 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:38:22.423 06:48:53 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.423 06:48:53 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.423 06:48:53 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.423 06:48:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.423 06:48:53 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.423 06:48:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.423 06:48:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:22.423 06:48:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:22.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:22.423 06:48:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:22.423 06:48:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:22.423 06:48:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:22.423 06:48:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:22.423 06:48:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:22.423 06:48:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.423 06:48:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:22.424 06:48:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.424 06:48:53 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:22.424 06:48:53 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:22.424 06:48:53 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:38:22.424 06:48:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:28.995 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:28.995 
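The trace above shows gather_supported_nvmf_pci_devs matching the first Intel E810 function (0x8086:0x159b); the continuation below resolves each function's kernel net devices through sysfs before printing the "Found net devices under ..." lines. A minimal standalone sketch of that sysfs walk, assuming the PCI addresses seen in this run:

#!/usr/bin/env bash
# Resolve kernel net device names for the E810 ports found above.
# PCI addresses 0000:86:00.0/1 are taken from this run's trace.
for pci in 0000:86:00.0 0000:86:00.1; do
    # a PCI network function exposes its netdev(s) under .../net/<ifname>
    for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $netdev ]] || continue
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done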
06:48:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:28.995 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:28.995 Found net devices under 0000:86:00.0: cvl_0_0 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:28.995 Found net devices under 0000:86:00.1: cvl_0_1 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:28.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:28.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:38:28.995 00:38:28.995 --- 10.0.0.2 ping statistics --- 00:38:28.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.995 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:28.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:28.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:38:28.995 00:38:28.995 --- 10.0.0.1 ping statistics --- 00:38:28.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.995 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:38:28.995 06:48:59 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:28.996 06:48:59 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:30.899 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:30.899 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:38:30.899 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:38:30.899 06:49:02 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:30.900 06:49:02 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:30.900 06:49:02 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:30.900 06:49:02 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:30.900 06:49:02 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:30.900 06:49:02 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:31.158 06:49:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:31.158 06:49:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:31.158 06:49:02 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:31.158 06:49:02 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:31.158 06:49:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:31.158 06:49:02 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=804971 00:38:31.158 06:49:02 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:31.158 06:49:02 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 804971 00:38:31.158 06:49:02 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 804971 ']' 00:38:31.158 06:49:02 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:31.158 06:49:02 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:31.158 06:49:02 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:38:31.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:31.158 06:49:02 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:31.158 06:49:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:31.159 [2024-11-20 06:49:02.813790] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:38:31.159 [2024-11-20 06:49:02.813837] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:31.159 [2024-11-20 06:49:02.893535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.159 [2024-11-20 06:49:02.933799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:31.159 [2024-11-20 06:49:02.933835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:31.159 [2024-11-20 06:49:02.933842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:31.159 [2024-11-20 06:49:02.933848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:31.159 [2024-11-20 06:49:02.933853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:31.159 [2024-11-20 06:49:02.934438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:38:31.417 06:49:03 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:31.417 06:49:03 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:31.417 06:49:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:31.417 06:49:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:31.417 [2024-11-20 06:49:03.068680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.417 06:49:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:31.417 06:49:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:31.417 ************************************ 00:38:31.417 START TEST fio_dif_1_default 00:38:31.417 ************************************ 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:31.417 bdev_null0 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:31.417 [2024-11-20 06:49:03.136972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:31.417 06:49:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:31.418 { 00:38:31.418 "params": { 00:38:31.418 "name": "Nvme$subsystem", 00:38:31.418 "trtype": "$TEST_TRANSPORT", 00:38:31.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:31.418 "adrfam": "ipv4", 00:38:31.418 "trsvcid": "$NVMF_PORT", 00:38:31.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:31.418 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:38:31.418 "hdgst": ${hdgst:-false}, 00:38:31.418 "ddgst": ${ddgst:-false} 00:38:31.418 }, 00:38:31.418 "method": "bdev_nvme_attach_controller" 00:38:31.418 } 00:38:31.418 EOF 00:38:31.418 )") 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:31.418 "params": { 00:38:31.418 "name": "Nvme0", 00:38:31.418 "trtype": "tcp", 00:38:31.418 "traddr": "10.0.0.2", 00:38:31.418 "adrfam": "ipv4", 00:38:31.418 "trsvcid": "4420", 00:38:31.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:31.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:31.418 "hdgst": false, 00:38:31.418 "ddgst": false 00:38:31.418 }, 00:38:31.418 "method": "bdev_nvme_attach_controller" 00:38:31.418 }' 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:31.418 06:49:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:31.982 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:31.982 fio-3.35 00:38:31.982 Starting 1 thread 00:38:44.181 00:38:44.181 filename0: (groupid=0, jobs=1): err= 0: pid=805284: Wed Nov 20 06:49:14 2024 00:38:44.181 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:38:44.181 slat (nsec): min=5703, max=25724, avg=6122.88, stdev=1330.83 00:38:44.181 clat (usec): min=40801, max=42370, avg=41001.31, stdev=146.71 00:38:44.181 lat (usec): min=40807, max=42396, avg=41007.43, stdev=147.08 00:38:44.181 clat percentiles (usec): 00:38:44.181 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:44.181 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:44.181 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:44.181 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:44.181 | 99.99th=[42206] 00:38:44.181 bw ( KiB/s): min= 384, max= 416, per=99.47%, avg=388.80, stdev=11.72, samples=20 00:38:44.181 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:44.181 lat (msec) : 50=100.00% 00:38:44.181 cpu : usr=92.56%, sys=7.19%, ctx=10, majf=0, minf=0 00:38:44.181 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:44.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.181 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.181 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:44.181 00:38:44.181 Run status group 0 (all jobs): 
00:38:44.181 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.181 00:38:44.181 real 0m11.239s 00:38:44.181 user 0m15.566s 00:38:44.181 sys 0m1.017s 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:44.181 ************************************ 00:38:44.181 END TEST fio_dif_1_default 00:38:44.181 ************************************ 00:38:44.181 06:49:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:44.181 06:49:14 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:44.181 06:49:14 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:44.181 06:49:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:44.181 ************************************ 00:38:44.181 START TEST fio_dif_1_multi_subsystems 00:38:44.181 ************************************ 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:44.181 bdev_null0 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:44.181 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:44.182 [2024-11-20 06:49:14.449026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:44.182 bdev_null1 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:44.182 { 00:38:44.182 "params": { 00:38:44.182 "name": "Nvme$subsystem", 00:38:44.182 "trtype": "$TEST_TRANSPORT", 00:38:44.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:44.182 "adrfam": "ipv4", 00:38:44.182 "trsvcid": "$NVMF_PORT", 00:38:44.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:44.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:44.182 "hdgst": ${hdgst:-false}, 00:38:44.182 "ddgst": ${ddgst:-false} 00:38:44.182 }, 00:38:44.182 "method": "bdev_nvme_attach_controller" 00:38:44.182 } 00:38:44.182 EOF 00:38:44.182 )") 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:44.182 { 00:38:44.182 "params": { 00:38:44.182 "name": "Nvme$subsystem", 00:38:44.182 "trtype": "$TEST_TRANSPORT", 00:38:44.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:44.182 "adrfam": "ipv4", 00:38:44.182 "trsvcid": "$NVMF_PORT", 00:38:44.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:44.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:44.182 "hdgst": ${hdgst:-false}, 00:38:44.182 "ddgst": ${ddgst:-false} 00:38:44.182 }, 00:38:44.182 "method": "bdev_nvme_attach_controller" 00:38:44.182 } 00:38:44.182 EOF 00:38:44.182 )") 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:44.182 "params": { 00:38:44.182 "name": "Nvme0", 00:38:44.182 "trtype": "tcp", 00:38:44.182 "traddr": "10.0.0.2", 00:38:44.182 "adrfam": "ipv4", 00:38:44.182 "trsvcid": "4420", 00:38:44.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:44.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:44.182 "hdgst": false, 00:38:44.182 "ddgst": false 00:38:44.182 }, 00:38:44.182 "method": "bdev_nvme_attach_controller" 00:38:44.182 },{ 00:38:44.182 "params": { 00:38:44.182 "name": "Nvme1", 00:38:44.182 "trtype": "tcp", 00:38:44.182 "traddr": "10.0.0.2", 00:38:44.182 "adrfam": "ipv4", 00:38:44.182 "trsvcid": "4420", 00:38:44.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:44.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:44.182 "hdgst": false, 00:38:44.182 "ddgst": false 00:38:44.182 }, 00:38:44.182 "method": "bdev_nvme_attach_controller" 00:38:44.182 }' 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 
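The JSON printed above carries one bdev_nvme_attach_controller stanza per subsystem (Nvme0 and Nvme1, both at 10.0.0.2:4420); fio's spdk_bdev ioengine attaches those controllers before any job starts. The harness hands the config and jobfile over /dev/fd/62 and /dev/fd/61; a sketch of the same invocation with ordinary files (subsys.json and dif.fio are assumed names for the generated config and jobfile):

PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# preload the SPDK fio plugin and point it at the generated config
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=subsys.json dif.fio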
00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:44.182 06:49:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.182 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:44.182 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:44.182 fio-3.35 00:38:44.182 Starting 2 threads 00:38:54.163 00:38:54.163 filename0: (groupid=0, jobs=1): err= 0: pid=807248: Wed Nov 20 06:49:25 2024 00:38:54.163 read: IOPS=195, BW=783KiB/s (802kB/s)(7840KiB/10016msec) 00:38:54.163 slat (nsec): min=5766, max=30456, avg=7091.15, stdev=2291.11 00:38:54.163 clat (usec): min=374, max=42612, avg=20419.65, stdev=20423.43 00:38:54.163 lat (usec): min=380, max=42619, avg=20426.74, stdev=20422.93 00:38:54.163 clat percentiles (usec): 00:38:54.163 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 416], 00:38:54.163 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 807], 60.00th=[40633], 00:38:54.163 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:38:54.163 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:54.163 | 99.99th=[42730] 00:38:54.163 bw ( KiB/s): min= 704, max= 896, per=49.85%, avg=782.40, stdev=39.50, samples=20 00:38:54.163 iops : min= 176, max= 224, avg=195.60, stdev= 9.88, samples=20 00:38:54.163 lat (usec) : 500=29.54%, 750=20.05%, 1000=1.48% 00:38:54.163 lat (msec) : 2=0.15%, 50=48.78% 00:38:54.163 cpu : usr=96.72%, sys=3.03%, ctx=13, majf=0, minf=69 00:38:54.163 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.163 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.163 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:54.163 filename1: (groupid=0, jobs=1): err= 0: pid=807249: Wed Nov 20 06:49:25 2024 00:38:54.163 read: IOPS=196, BW=786KiB/s (805kB/s)(7872KiB/10016msec) 00:38:54.163 slat (nsec): min=5753, max=30899, avg=6993.79, stdev=2163.38 00:38:54.164 clat (usec): min=382, max=42603, avg=20337.02, stdev=20472.76 00:38:54.164 lat (usec): min=388, max=42609, avg=20344.01, stdev=20472.28 00:38:54.164 clat percentiles (usec): 00:38:54.164 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 478], 00:38:54.164 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 865], 60.00th=[41157], 00:38:54.164 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:38:54.164 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:54.164 | 99.99th=[42730] 00:38:54.164 bw ( KiB/s): min= 672, max= 896, per=50.04%, avg=785.60, stdev=54.42, samples=20 00:38:54.164 iops : min= 168, max= 224, avg=196.40, stdev=13.60, samples=20 00:38:54.164 lat (usec) : 500=28.71%, 750=20.48%, 1000=2.18% 00:38:54.164 lat (msec) : 2=0.25%, 50=48.37% 00:38:54.164 cpu : usr=96.56%, sys=3.20%, ctx=12, majf=0, minf=109 00:38:54.164 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.164 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.164 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.164 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:54.164 00:38:54.164 Run status group 0 (all jobs): 00:38:54.164 READ: bw=1569KiB/s (1606kB/s), 783KiB/s-786KiB/s (802kB/s-805kB/s), io=15.3MiB (16.1MB), run=10016-10016msec 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.164 00:38:54.164 real 0m11.420s 00:38:54.164 user 0m26.792s 00:38:54.164 sys 0m0.923s 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 ************************************ 00:38:54.164 END TEST fio_dif_1_multi_subsystems 00:38:54.164 ************************************ 00:38:54.164 06:49:25 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:54.164 06:49:25 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:54.164 06:49:25 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 ************************************ 00:38:54.164 START TEST fio_dif_rand_params 00:38:54.164 ************************************ 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 bdev_null0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:54.164 [2024-11-20 06:49:25.940444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:54.164 { 00:38:54.164 "params": { 00:38:54.164 "name": "Nvme$subsystem", 00:38:54.164 "trtype": "$TEST_TRANSPORT", 00:38:54.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.164 "adrfam": "ipv4", 00:38:54.164 "trsvcid": "$NVMF_PORT", 00:38:54.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.164 "hdgst": ${hdgst:-false}, 00:38:54.164 "ddgst": ${ddgst:-false} 00:38:54.164 }, 00:38:54.164 "method": "bdev_nvme_attach_controller" 00:38:54.164 } 00:38:54.164 EOF 00:38:54.164 )") 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:54.164 06:49:25 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:38:54.165 06:49:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:54.165 06:49:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:54.165 "params": { 00:38:54.165 "name": "Nvme0", 00:38:54.165 "trtype": "tcp", 00:38:54.165 "traddr": "10.0.0.2", 00:38:54.165 "adrfam": "ipv4", 00:38:54.165 "trsvcid": "4420", 00:38:54.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:54.165 "hdgst": false, 00:38:54.165 "ddgst": false 00:38:54.165 }, 00:38:54.165 "method": "bdev_nvme_attach_controller" 00:38:54.165 }' 00:38:54.165 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:54.165 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:54.165 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.165 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.165 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:54.165 06:49:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:54.453 06:49:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:54.453 06:49:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:54.453 06:49:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:54.453 06:49:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.717 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:54.717 ... 
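(For reference, the trace above reduces to the following setup and fio launch. This is a minimal sketch, not the verbatim test script: it assumes SPDK's scripts/rpc.py stands in for the rpc_cmd wrapper and that the target is already running with the TCP transport created.)

    # Back the subsystem with a 64 MB null bdev: 512-byte blocks,
    # 16 bytes of metadata per block, protection information DIF type 3
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # Expose it over NVMe/TCP on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Drive I/O through the SPDK fio plugin; the bdev_nvme_attach_controller
    # JSON printed above arrives on fd 62, the generated fio job file on fd 61
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61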
00:38:54.717 fio-3.35 00:38:54.717 Starting 3 threads 00:39:01.274 00:39:01.274 filename0: (groupid=0, jobs=1): err= 0: pid=809213: Wed Nov 20 06:49:31 2024 00:39:01.274 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(197MiB/5003msec) 00:39:01.274 slat (nsec): min=6054, max=36832, avg=10466.90, stdev=2029.47 00:39:01.274 clat (usec): min=3071, max=52547, avg=9492.13, stdev=5333.29 00:39:01.274 lat (usec): min=3083, max=52559, avg=9502.59, stdev=5333.22 00:39:01.274 clat percentiles (usec): 00:39:01.274 | 1.00th=[ 5342], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 7832], 00:39:01.274 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:39:01.274 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[11207], 00:39:01.274 | 99.00th=[47973], 99.50th=[49021], 99.90th=[52167], 99.95th=[52691], 00:39:01.274 | 99.99th=[52691] 00:39:01.274 bw ( KiB/s): min=35072, max=46080, per=33.21%, avg=40931.56, stdev=3468.88, samples=9 00:39:01.274 iops : min= 274, max= 360, avg=319.78, stdev=27.10, samples=9 00:39:01.274 lat (msec) : 4=0.06%, 10=79.67%, 20=18.56%, 50=1.39%, 100=0.32% 00:39:01.274 cpu : usr=94.34%, sys=5.38%, ctx=12, majf=0, minf=9 00:39:01.274 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.274 issued rwts: total=1579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.274 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.274 filename0: (groupid=0, jobs=1): err= 0: pid=809214: Wed Nov 20 06:49:31 2024 00:39:01.274 read: IOPS=336, BW=42.1MiB/s (44.1MB/s)(211MiB/5005msec) 00:39:01.274 slat (nsec): min=6055, max=24258, avg=10407.23, stdev=1991.30 00:39:01.274 clat (usec): min=3056, max=50592, avg=8896.43, stdev=4740.79 00:39:01.274 lat (usec): min=3063, max=50614, avg=8906.84, stdev=4740.96 00:39:01.274 clat percentiles (usec): 00:39:01.274 | 1.00th=[ 3687], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7373], 00:39:01.274 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:39:01.274 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10814], 00:39:01.274 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:39:01.274 | 99.99th=[50594] 00:39:01.274 bw ( KiB/s): min=32256, max=47616, per=34.96%, avg=43084.80, stdev=4592.25, samples=10 00:39:01.274 iops : min= 252, max= 372, avg=336.60, stdev=35.88, samples=10 00:39:01.274 lat (msec) : 4=2.14%, 10=84.51%, 20=12.11%, 50=1.07%, 100=0.18% 00:39:01.274 cpu : usr=94.56%, sys=5.12%, ctx=10, majf=0, minf=0 00:39:01.274 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.274 issued rwts: total=1685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.274 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.274 filename0: (groupid=0, jobs=1): err= 0: pid=809215: Wed Nov 20 06:49:31 2024 00:39:01.274 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(199MiB/5044msec) 00:39:01.274 slat (nsec): min=6106, max=27295, avg=10700.89, stdev=1955.91 00:39:01.274 clat (usec): min=3323, max=51277, avg=9459.08, stdev=4087.30 00:39:01.274 lat (usec): min=3331, max=51288, avg=9469.78, stdev=4087.41 00:39:01.274 clat percentiles (usec): 00:39:01.274 | 1.00th=[ 3589], 5.00th=[ 5932], 10.00th=[ 6390], 
20.00th=[ 7701], 00:39:01.274 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:39:01.274 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11338], 95.00th=[11863], 00:39:01.274 | 99.00th=[13173], 99.50th=[47973], 99.90th=[50594], 99.95th=[51119], 00:39:01.274 | 99.99th=[51119] 00:39:01.274 bw ( KiB/s): min=37632, max=44288, per=33.04%, avg=40729.60, stdev=2013.76, samples=10 00:39:01.274 iops : min= 294, max= 346, avg=318.20, stdev=15.73, samples=10 00:39:01.274 lat (msec) : 4=1.51%, 10=63.47%, 20=34.15%, 50=0.69%, 100=0.19% 00:39:01.274 cpu : usr=94.19%, sys=5.51%, ctx=8, majf=0, minf=9 00:39:01.274 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.274 issued rwts: total=1593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.274 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.274 00:39:01.274 Run status group 0 (all jobs): 00:39:01.274 READ: bw=120MiB/s (126MB/s), 39.5MiB/s-42.1MiB/s (41.4MB/s-44.1MB/s), io=607MiB (637MB), run=5003-5044msec 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:01.274 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 bdev_null0 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 [2024-11-20 06:49:32.182455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 bdev_null1 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 06:49:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 bdev_null2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.275 { 00:39:01.275 "params": { 00:39:01.275 "name": "Nvme$subsystem", 00:39:01.275 "trtype": "$TEST_TRANSPORT", 00:39:01.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.275 "adrfam": "ipv4", 00:39:01.275 "trsvcid": "$NVMF_PORT", 00:39:01.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.275 "hdgst": ${hdgst:-false}, 00:39:01.275 "ddgst": ${ddgst:-false} 00:39:01.275 }, 00:39:01.275 "method": "bdev_nvme_attach_controller" 00:39:01.275 } 00:39:01.275 EOF 00:39:01.275 )") 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.275 { 00:39:01.275 "params": { 00:39:01.275 "name": "Nvme$subsystem", 00:39:01.275 "trtype": "$TEST_TRANSPORT", 00:39:01.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.275 "adrfam": "ipv4", 00:39:01.275 "trsvcid": "$NVMF_PORT", 00:39:01.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.275 "hdgst": ${hdgst:-false}, 00:39:01.275 "ddgst": ${ddgst:-false} 00:39:01.275 }, 00:39:01.275 "method": "bdev_nvme_attach_controller" 00:39:01.275 } 00:39:01.275 EOF 00:39:01.275 )") 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.275 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.276 { 00:39:01.276 "params": { 00:39:01.276 "name": "Nvme$subsystem", 00:39:01.276 "trtype": "$TEST_TRANSPORT", 00:39:01.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.276 "adrfam": "ipv4", 00:39:01.276 "trsvcid": "$NVMF_PORT", 00:39:01.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.276 "hdgst": ${hdgst:-false}, 00:39:01.276 "ddgst": ${ddgst:-false} 00:39:01.276 }, 00:39:01.276 "method": "bdev_nvme_attach_controller" 00:39:01.276 } 00:39:01.276 EOF 00:39:01.276 )") 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.276 "params": { 00:39:01.276 "name": "Nvme0", 00:39:01.276 "trtype": "tcp", 00:39:01.276 "traddr": "10.0.0.2", 00:39:01.276 "adrfam": "ipv4", 00:39:01.276 "trsvcid": "4420", 00:39:01.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:01.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:01.276 "hdgst": false, 00:39:01.276 "ddgst": false 00:39:01.276 }, 00:39:01.276 "method": "bdev_nvme_attach_controller" 00:39:01.276 },{ 00:39:01.276 "params": { 00:39:01.276 "name": "Nvme1", 00:39:01.276 "trtype": "tcp", 00:39:01.276 "traddr": "10.0.0.2", 00:39:01.276 "adrfam": "ipv4", 00:39:01.276 "trsvcid": "4420", 00:39:01.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.276 "hdgst": false, 00:39:01.276 "ddgst": false 00:39:01.276 }, 00:39:01.276 "method": "bdev_nvme_attach_controller" 00:39:01.276 },{ 00:39:01.276 "params": { 00:39:01.276 "name": "Nvme2", 00:39:01.276 "trtype": "tcp", 00:39:01.276 "traddr": "10.0.0.2", 00:39:01.276 "adrfam": "ipv4", 00:39:01.276 "trsvcid": "4420", 00:39:01.276 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:01.276 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:01.276 "hdgst": false, 00:39:01.276 "ddgst": false 00:39:01.276 }, 00:39:01.276 "method": "bdev_nvme_attach_controller" 00:39:01.276 }' 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # asan_lib= 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:01.276 06:49:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:01.276 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:01.276 ... 00:39:01.276 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:01.276 ... 00:39:01.276 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:01.276 ... 00:39:01.276 fio-3.35 00:39:01.276 Starting 24 threads 00:39:13.467 00:39:13.467 filename0: (groupid=0, jobs=1): err= 0: pid=810386: Wed Nov 20 06:49:43 2024 00:39:13.467 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:39:13.467 slat (nsec): min=7657, max=54380, avg=18994.44, stdev=7519.75 00:39:13.467 clat (usec): min=11806, max=44861, avg=29828.44, stdev=1812.73 00:39:13.467 lat (usec): min=11831, max=44883, avg=29847.44, stdev=1811.80 00:39:13.467 clat percentiles (usec): 00:39:13.467 | 1.00th=[19268], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:39:13.467 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.467 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:39:13.467 | 99.00th=[30802], 99.50th=[31589], 99.90th=[31851], 99.95th=[43779], 00:39:13.467 | 99.99th=[44827] 00:39:13.467 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2131.20, stdev=75.15, samples=20 00:39:13.467 iops : min= 512, max= 576, avg=532.80, stdev=18.79, samples=20 00:39:13.467 lat (msec) : 20=1.27%, 50=98.73% 00:39:13.467 cpu : usr=98.17%, sys=1.49%, ctx=7, majf=0, minf=9 00:39:13.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.467 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.467 filename0: (groupid=0, jobs=1): err= 0: pid=810387: Wed Nov 20 06:49:43 2024 00:39:13.467 read: IOPS=538, BW=2153KiB/s (2205kB/s)(21.0MiB/10007msec) 00:39:13.467 slat (nsec): min=7276, max=75720, avg=16167.08, stdev=10512.92 00:39:13.467 clat (usec): min=9571, max=69263, avg=29639.53, stdev=4269.75 00:39:13.467 lat (usec): min=9580, max=69306, avg=29655.70, stdev=4269.83 00:39:13.467 clat percentiles (usec): 00:39:13.467 | 1.00th=[16188], 5.00th=[22414], 10.00th=[23725], 20.00th=[29492], 00:39:13.467 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.467 | 70.00th=[30016], 80.00th=[30278], 90.00th=[31589], 95.00th=[36439], 00:39:13.467 | 99.00th=[38536], 99.50th=[41681], 99.90th=[68682], 99.95th=[68682], 00:39:13.467 | 99.99th=[69731] 00:39:13.467 bw ( KiB/s): min= 1920, max= 2208, per=4.21%, avg=2146.53, stdev=62.48, samples=19 00:39:13.467 iops : min= 480, max= 552, avg=536.63, stdev=15.62, samples=19 00:39:13.467 lat (msec) : 10=0.30%, 20=1.41%, 50=97.99%, 100=0.30% 00:39:13.467 cpu : usr=98.19%, sys=1.46%, ctx=5, majf=0, 
minf=9 00:39:13.467 IO depths : 1=1.2%, 2=2.5%, 4=6.8%, 8=75.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:39:13.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.467 complete : 0=0.0%, 4=89.9%, 8=7.6%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.467 issued rwts: total=5386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.467 filename0: (groupid=0, jobs=1): err= 0: pid=810388: Wed Nov 20 06:49:43 2024 00:39:13.467 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec) 00:39:13.467 slat (nsec): min=5195, max=85384, avg=25934.30, stdev=12745.87 00:39:13.467 clat (usec): min=9581, max=51875, avg=29865.76, stdev=1706.64 00:39:13.467 lat (usec): min=9597, max=51892, avg=29891.69, stdev=1706.90 00:39:13.467 clat percentiles (usec): 00:39:13.467 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.467 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:39:13.468 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.468 | 99.00th=[30802], 99.50th=[31589], 99.90th=[51643], 99.95th=[51643], 00:39:13.468 | 99.99th=[51643] 00:39:13.468 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2118.40, stdev=77.42, samples=20 00:39:13.468 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20 00:39:13.468 lat (msec) : 10=0.30%, 50=99.40%, 100=0.30% 00:39:13.468 cpu : usr=98.53%, sys=1.09%, ctx=13, majf=0, minf=9 00:39:13.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.468 filename0: (groupid=0, jobs=1): err= 0: pid=810389: Wed Nov 20 06:49:43 2024 00:39:13.468 read: IOPS=529, BW=2118KiB/s (2169kB/s)(20.7MiB/10002msec) 00:39:13.468 slat (nsec): min=4613, max=52668, avg=26048.65, stdev=7552.89 00:39:13.468 clat (usec): min=21900, max=58764, avg=29998.34, stdev=1517.86 00:39:13.468 lat (usec): min=21933, max=58778, avg=30024.39, stdev=1516.74 00:39:13.468 clat percentiles (usec): 00:39:13.468 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.468 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.468 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.468 | 99.00th=[30802], 99.50th=[31589], 99.90th=[55837], 99.95th=[55837], 00:39:13.468 | 99.99th=[58983] 00:39:13.468 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2112.15, stdev=77.30, samples=20 00:39:13.468 iops : min= 480, max= 544, avg=528.00, stdev=19.42, samples=20 00:39:13.468 lat (msec) : 50=99.70%, 100=0.30% 00:39:13.468 cpu : usr=98.16%, sys=1.50%, ctx=16, majf=0, minf=9 00:39:13.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.468 filename0: (groupid=0, jobs=1): err= 0: pid=810390: Wed Nov 20 06:49:43 2024 00:39:13.468 read: IOPS=530, BW=2124KiB/s (2175kB/s)(20.8MiB/10004msec) 00:39:13.468 slat 
(nsec): min=5563, max=54018, avg=18515.04, stdev=7851.98 00:39:13.468 clat (usec): min=17574, max=42537, avg=29982.54, stdev=1826.10 00:39:13.468 lat (usec): min=17583, max=42545, avg=30001.06, stdev=1826.01 00:39:13.468 clat percentiles (usec): 00:39:13.468 | 1.00th=[18744], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.468 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.468 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:39:13.468 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:39:13.468 | 99.99th=[42730] 00:39:13.468 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2122.11, stdev=64.93, samples=19 00:39:13.468 iops : min= 512, max= 544, avg=530.53, stdev=16.23, samples=19 00:39:13.468 lat (msec) : 20=1.36%, 50=98.64% 00:39:13.468 cpu : usr=98.50%, sys=1.15%, ctx=14, majf=0, minf=9 00:39:13.468 IO depths : 1=5.6%, 2=11.7%, 4=24.6%, 8=51.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:39:13.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.468 filename0: (groupid=0, jobs=1): err= 0: pid=810392: Wed Nov 20 06:49:43 2024 00:39:13.468 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:39:13.468 slat (nsec): min=8172, max=58559, avg=24935.53, stdev=7397.16 00:39:13.468 clat (usec): min=11460, max=51680, avg=29761.55, stdev=2304.84 00:39:13.468 lat (usec): min=11473, max=51701, avg=29786.49, stdev=2305.07 00:39:13.468 clat percentiles (usec): 00:39:13.468 | 1.00th=[15139], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.468 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.468 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.468 | 99.00th=[31327], 99.50th=[33817], 99.90th=[45351], 99.95th=[51643], 00:39:13.468 | 99.99th=[51643] 00:39:13.468 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2131.20, stdev=75.15, samples=20 00:39:13.468 iops : min= 512, max= 576, avg=532.80, stdev=18.79, samples=20 00:39:13.468 lat (msec) : 20=1.68%, 50=98.24%, 100=0.07% 00:39:13.468 cpu : usr=98.27%, sys=1.38%, ctx=16, majf=0, minf=9 00:39:13.468 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:13.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.468 filename0: (groupid=0, jobs=1): err= 0: pid=810393: Wed Nov 20 06:49:43 2024 00:39:13.468 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec) 00:39:13.468 slat (nsec): min=7532, max=53884, avg=23578.45, stdev=7329.27 00:39:13.468 clat (usec): min=19026, max=34053, avg=29919.41, stdev=679.77 00:39:13.468 lat (usec): min=19041, max=34073, avg=29942.99, stdev=680.54 00:39:13.468 clat percentiles (usec): 00:39:13.468 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:39:13.468 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.468 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.468 | 99.00th=[30802], 99.50th=[31589], 99.90th=[33817], 99.95th=[33817], 00:39:13.468 | 
99.99th=[33817] 00:39:13.468 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2117.10, stdev=61.00, samples=20 00:39:13.468 iops : min= 512, max= 544, avg=529.35, stdev=15.20, samples=20 00:39:13.468 lat (msec) : 20=0.30%, 50=99.70% 00:39:13.468 cpu : usr=98.57%, sys=1.08%, ctx=14, majf=0, minf=9 00:39:13.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.468 filename0: (groupid=0, jobs=1): err= 0: pid=810394: Wed Nov 20 06:49:43 2024 00:39:13.468 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:39:13.468 slat (nsec): min=9590, max=57378, avg=23177.90, stdev=5755.07 00:39:13.468 clat (usec): min=11666, max=31948, avg=29778.49, stdev=1717.53 00:39:13.468 lat (usec): min=11702, max=31966, avg=29801.67, stdev=1717.04 00:39:13.468 clat percentiles (usec): 00:39:13.468 | 1.00th=[19530], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:39:13.468 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.468 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.468 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:39:13.468 | 99.99th=[31851] 00:39:13.468 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2131.20, stdev=75.15, samples=20 00:39:13.468 iops : min= 512, max= 576, avg=532.80, stdev=18.79, samples=20 00:39:13.468 lat (msec) : 20=1.20%, 50=98.80% 00:39:13.468 cpu : usr=98.53%, sys=1.13%, ctx=12, majf=0, minf=9 00:39:13.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.468 filename1: (groupid=0, jobs=1): err= 0: pid=810395: Wed Nov 20 06:49:43 2024 00:39:13.468 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:39:13.468 slat (nsec): min=8199, max=58681, avg=22018.23, stdev=8511.39 00:39:13.468 clat (usec): min=11600, max=31988, avg=29805.58, stdev=1725.28 00:39:13.468 lat (usec): min=11625, max=32005, avg=29827.59, stdev=1724.22 00:39:13.468 clat percentiles (usec): 00:39:13.468 | 1.00th=[19268], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.468 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.468 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.468 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:39:13.468 | 99.99th=[32113] 00:39:13.468 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2131.20, stdev=75.15, samples=20 00:39:13.468 iops : min= 512, max= 576, avg=532.80, stdev=18.79, samples=20 00:39:13.468 lat (msec) : 20=1.20%, 50=98.80% 00:39:13.468 cpu : usr=98.36%, sys=1.30%, ctx=8, majf=0, minf=9 00:39:13.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 issued rwts: total=5344,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:39:13.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.468 filename1: (groupid=0, jobs=1): err= 0: pid=810396: Wed Nov 20 06:49:43 2024 00:39:13.468 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10009msec) 00:39:13.468 slat (nsec): min=4302, max=54771, avg=25349.19, stdev=8107.30 00:39:13.468 clat (usec): min=9641, max=52482, avg=29907.15, stdev=1727.78 00:39:13.468 lat (usec): min=9655, max=52496, avg=29932.50, stdev=1727.61 00:39:13.468 clat percentiles (usec): 00:39:13.468 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.468 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.468 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.468 | 99.00th=[30802], 99.50th=[31589], 99.90th=[52691], 99.95th=[52691], 00:39:13.468 | 99.99th=[52691] 00:39:13.468 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2118.40, stdev=77.42, samples=20 00:39:13.468 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20 00:39:13.468 lat (msec) : 10=0.19%, 20=0.11%, 50=99.40%, 100=0.30% 00:39:13.468 cpu : usr=98.39%, sys=1.25%, ctx=14, majf=0, minf=9 00:39:13.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.468 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.469 filename1: (groupid=0, jobs=1): err= 0: pid=810397: Wed Nov 20 06:49:43 2024 00:39:13.469 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10009msec) 00:39:13.469 slat (usec): min=8, max=120, avg=24.91, stdev= 8.34 00:39:13.469 clat (usec): min=22049, max=35076, avg=29944.36, stdev=533.06 00:39:13.469 lat (usec): min=22083, max=35095, avg=29969.27, stdev=532.05 00:39:13.469 clat percentiles (usec): 00:39:13.469 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.469 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.469 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.469 | 99.00th=[30802], 99.50th=[31589], 99.90th=[33162], 99.95th=[33162], 00:39:13.469 | 99.99th=[34866] 00:39:13.469 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2117.30, stdev=64.49, samples=20 00:39:13.469 iops : min= 512, max= 544, avg=529.30, stdev=16.11, samples=20 00:39:13.469 lat (msec) : 50=100.00% 00:39:13.469 cpu : usr=98.26%, sys=1.39%, ctx=14, majf=0, minf=9 00:39:13.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.469 filename1: (groupid=0, jobs=1): err= 0: pid=810398: Wed Nov 20 06:49:43 2024 00:39:13.469 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:39:13.469 slat (nsec): min=7357, max=48546, avg=23521.23, stdev=7435.28 00:39:13.469 clat (usec): min=12104, max=34307, avg=29757.74, stdev=1735.83 00:39:13.469 lat (usec): min=12126, max=34332, avg=29781.26, stdev=1736.99 00:39:13.469 clat percentiles (usec): 00:39:13.469 | 1.00th=[15270], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 
00:39:13.469 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.469 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.469 | 99.00th=[30802], 99.50th=[31589], 99.90th=[34341], 99.95th=[34341], 00:39:13.469 | 99.99th=[34341] 00:39:13.469 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2131.20, stdev=75.15, samples=20 00:39:13.469 iops : min= 512, max= 576, avg=532.80, stdev=18.79, samples=20 00:39:13.469 lat (msec) : 20=1.16%, 50=98.84% 00:39:13.469 cpu : usr=98.34%, sys=1.31%, ctx=14, majf=0, minf=9 00:39:13.469 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.469 filename1: (groupid=0, jobs=1): err= 0: pid=810399: Wed Nov 20 06:49:43 2024 00:39:13.469 read: IOPS=531, BW=2127KiB/s (2178kB/s)(20.8MiB/10021msec) 00:39:13.469 slat (nsec): min=7272, max=31021, avg=11028.49, stdev=3563.93 00:39:13.469 clat (usec): min=17327, max=42767, avg=29995.71, stdev=1627.41 00:39:13.469 lat (usec): min=17335, max=42785, avg=30006.74, stdev=1627.39 00:39:13.469 clat percentiles (usec): 00:39:13.469 | 1.00th=[19006], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:39:13.469 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.469 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:39:13.469 | 99.00th=[31589], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:39:13.469 | 99.99th=[42730] 00:39:13.469 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2124.80, stdev=62.85, samples=20 00:39:13.469 iops : min= 512, max= 544, avg=531.20, stdev=15.71, samples=20 00:39:13.469 lat (msec) : 20=1.05%, 50=98.95% 00:39:13.469 cpu : usr=98.48%, sys=1.17%, ctx=14, majf=0, minf=9 00:39:13.469 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:39:13.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.469 filename1: (groupid=0, jobs=1): err= 0: pid=810400: Wed Nov 20 06:49:43 2024 00:39:13.469 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10007msec) 00:39:13.469 slat (nsec): min=4469, max=51174, avg=24873.43, stdev=6597.50 00:39:13.469 clat (usec): min=7809, max=55090, avg=29914.29, stdev=1798.73 00:39:13.469 lat (usec): min=7828, max=55104, avg=29939.17, stdev=1798.62 00:39:13.469 clat percentiles (usec): 00:39:13.469 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.469 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.469 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.469 | 99.00th=[30802], 99.50th=[31851], 99.90th=[52167], 99.95th=[52167], 00:39:13.469 | 99.99th=[55313] 00:39:13.469 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2118.40, stdev=77.42, samples=20 00:39:13.469 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20 00:39:13.469 lat (msec) : 10=0.30%, 50=99.40%, 100=0.30% 00:39:13.469 cpu : usr=98.40%, sys=1.25%, ctx=14, majf=0, minf=9 00:39:13.469 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.469 filename1: (groupid=0, jobs=1): err= 0: pid=810401: Wed Nov 20 06:49:43 2024 00:39:13.469 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:39:13.469 slat (nsec): min=9386, max=51626, avg=24441.43, stdev=7486.75 00:39:13.469 clat (usec): min=11686, max=32050, avg=29772.21, stdev=1715.78 00:39:13.469 lat (usec): min=11714, max=32073, avg=29796.65, stdev=1715.48 00:39:13.469 clat percentiles (usec): 00:39:13.469 | 1.00th=[19268], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:39:13.469 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.469 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.469 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31851], 99.95th=[32113], 00:39:13.469 | 99.99th=[32113] 00:39:13.469 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2131.20, stdev=75.15, samples=20 00:39:13.469 iops : min= 512, max= 576, avg=532.80, stdev=18.79, samples=20 00:39:13.469 lat (msec) : 20=1.20%, 50=98.80% 00:39:13.469 cpu : usr=98.38%, sys=1.28%, ctx=14, majf=0, minf=9 00:39:13.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.469 filename1: (groupid=0, jobs=1): err= 0: pid=810402: Wed Nov 20 06:49:43 2024 00:39:13.469 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10007msec) 00:39:13.469 slat (nsec): min=8220, max=66024, avg=26179.03, stdev=8461.01 00:39:13.469 clat (usec): min=21889, max=33813, avg=29929.23, stdev=519.72 00:39:13.469 lat (usec): min=21918, max=33830, avg=29955.41, stdev=518.71 00:39:13.469 clat percentiles (usec): 00:39:13.469 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.469 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.469 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.469 | 99.00th=[30802], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:39:13.469 | 99.99th=[33817] 00:39:13.469 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2117.50, stdev=64.27, samples=20 00:39:13.469 iops : min= 512, max= 544, avg=529.35, stdev=16.05, samples=20 00:39:13.469 lat (msec) : 50=100.00% 00:39:13.469 cpu : usr=98.55%, sys=1.11%, ctx=14, majf=0, minf=9 00:39:13.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.469 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.469 filename2: (groupid=0, jobs=1): err= 0: pid=810403: Wed Nov 20 06:49:43 2024 00:39:13.469 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10009msec) 00:39:13.469 slat (nsec): min=4556, max=51843, avg=24617.04, stdev=8332.93 
00:39:13.469 clat (usec): min=9660, max=52078, avg=29907.97, stdev=1713.62 00:39:13.469 lat (usec): min=9678, max=52092, avg=29932.59, stdev=1713.47 00:39:13.469 clat percentiles (usec): 00:39:13.469 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.469 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.469 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.469 | 99.00th=[30802], 99.50th=[31589], 99.90th=[52167], 99.95th=[52167], 00:39:13.469 | 99.99th=[52167] 00:39:13.469 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2118.40, stdev=77.42, samples=20 00:39:13.469 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20 00:39:13.469 lat (msec) : 10=0.30%, 50=99.40%, 100=0.30% 00:39:13.469 cpu : usr=98.46%, sys=1.19%, ctx=14, majf=0, minf=9 00:39:13.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.470 filename2: (groupid=0, jobs=1): err= 0: pid=810405: Wed Nov 20 06:49:43 2024 00:39:13.470 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec) 00:39:13.470 slat (nsec): min=5383, max=89198, avg=13399.52, stdev=6638.19 00:39:13.470 clat (usec): min=22015, max=32682, avg=30032.89, stdev=527.48 00:39:13.470 lat (usec): min=22035, max=32698, avg=30046.29, stdev=526.35 00:39:13.470 clat percentiles (usec): 00:39:13.470 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:39:13.470 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.470 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:39:13.470 | 99.00th=[30802], 99.50th=[31589], 99.90th=[32637], 99.95th=[32637], 00:39:13.470 | 99.99th=[32637] 00:39:13.470 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2115.37, stdev=65.66, samples=19 00:39:13.470 iops : min= 512, max= 544, avg=528.84, stdev=16.42, samples=19 00:39:13.470 lat (msec) : 50=100.00% 00:39:13.470 cpu : usr=98.25%, sys=1.41%, ctx=13, majf=0, minf=11 00:39:13.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.470 filename2: (groupid=0, jobs=1): err= 0: pid=810406: Wed Nov 20 06:49:43 2024 00:39:13.470 read: IOPS=529, BW=2118KiB/s (2168kB/s)(20.7MiB/10004msec) 00:39:13.470 slat (nsec): min=4191, max=53539, avg=25380.80, stdev=6598.67 00:39:13.470 clat (usec): min=21968, max=60089, avg=30004.72, stdev=1596.58 00:39:13.470 lat (usec): min=21983, max=60103, avg=30030.10, stdev=1595.50 00:39:13.470 clat percentiles (usec): 00:39:13.470 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:39:13.470 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.470 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.470 | 99.00th=[30802], 99.50th=[31851], 99.90th=[56886], 99.95th=[56886], 00:39:13.470 | 99.99th=[60031] 00:39:13.470 bw ( KiB/s): min= 1920, max= 2176, 
per=4.14%, avg=2112.00, stdev=77.69, samples=20 00:39:13.470 iops : min= 480, max= 544, avg=528.00, stdev=19.42, samples=20 00:39:13.470 lat (msec) : 50=99.70%, 100=0.30% 00:39:13.470 cpu : usr=98.29%, sys=1.36%, ctx=13, majf=0, minf=9 00:39:13.470 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.470 filename2: (groupid=0, jobs=1): err= 0: pid=810407: Wed Nov 20 06:49:43 2024 00:39:13.470 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10009msec) 00:39:13.470 slat (nsec): min=7744, max=53280, avg=20687.62, stdev=8116.88 00:39:13.470 clat (usec): min=22024, max=33177, avg=29982.85, stdev=532.01 00:39:13.470 lat (usec): min=22055, max=33193, avg=30003.54, stdev=530.65 00:39:13.470 clat percentiles (usec): 00:39:13.470 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:39:13.470 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.470 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.470 | 99.00th=[30802], 99.50th=[31589], 99.90th=[33162], 99.95th=[33162], 00:39:13.470 | 99.99th=[33162] 00:39:13.470 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2117.30, stdev=64.49, samples=20 00:39:13.470 iops : min= 512, max= 544, avg=529.30, stdev=16.11, samples=20 00:39:13.470 lat (msec) : 50=100.00% 00:39:13.470 cpu : usr=98.21%, sys=1.44%, ctx=15, majf=0, minf=9 00:39:13.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.470 filename2: (groupid=0, jobs=1): err= 0: pid=810408: Wed Nov 20 06:49:43 2024 00:39:13.470 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:39:13.470 slat (nsec): min=7920, max=53711, avg=24424.78, stdev=7948.21 00:39:13.470 clat (usec): min=11086, max=31980, avg=29750.15, stdev=1709.34 00:39:13.470 lat (usec): min=11096, max=32014, avg=29774.57, stdev=1709.55 00:39:13.470 clat percentiles (usec): 00:39:13.470 | 1.00th=[19268], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.470 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.470 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.470 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:39:13.470 | 99.99th=[31851] 00:39:13.470 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2131.20, stdev=75.15, samples=20 00:39:13.470 iops : min= 512, max= 576, avg=532.80, stdev=18.79, samples=20 00:39:13.470 lat (msec) : 20=1.20%, 50=98.80% 00:39:13.470 cpu : usr=98.41%, sys=1.22%, ctx=13, majf=0, minf=9 00:39:13.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.470 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:39:13.470 filename2: (groupid=0, jobs=1): err= 0: pid=810409: Wed Nov 20 06:49:43 2024 00:39:13.470 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:39:13.470 slat (usec): min=7, max=105, avg=21.15, stdev= 9.00 00:39:13.470 clat (usec): min=11612, max=31887, avg=29809.78, stdev=1724.96 00:39:13.470 lat (usec): min=11653, max=31904, avg=29830.93, stdev=1723.68 00:39:13.470 clat percentiles (usec): 00:39:13.470 | 1.00th=[19268], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.470 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:39:13.470 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:39:13.470 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:39:13.470 | 99.99th=[31851] 00:39:13.470 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2131.20, stdev=75.15, samples=20 00:39:13.470 iops : min= 512, max= 576, avg=532.80, stdev=18.79, samples=20 00:39:13.470 lat (msec) : 20=1.20%, 50=98.80% 00:39:13.470 cpu : usr=98.45%, sys=1.19%, ctx=33, majf=0, minf=9 00:39:13.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.470 filename2: (groupid=0, jobs=1): err= 0: pid=810410: Wed Nov 20 06:49:43 2024 00:39:13.470 read: IOPS=529, BW=2118KiB/s (2169kB/s)(20.7MiB/10002msec) 00:39:13.470 slat (nsec): min=5005, max=69666, avg=25791.44, stdev=7842.52 00:39:13.470 clat (usec): min=21888, max=56284, avg=29983.79, stdev=1528.54 00:39:13.470 lat (usec): min=21911, max=56301, avg=30009.58, stdev=1527.72 00:39:13.470 clat percentiles (usec): 00:39:13.470 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.470 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:39:13.470 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.470 | 99.00th=[30802], 99.50th=[31589], 99.90th=[56361], 99.95th=[56361], 00:39:13.470 | 99.99th=[56361] 00:39:13.470 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2112.15, stdev=77.30, samples=20 00:39:13.470 iops : min= 480, max= 544, avg=528.00, stdev=19.42, samples=20 00:39:13.470 lat (msec) : 50=99.70%, 100=0.30% 00:39:13.470 cpu : usr=98.55%, sys=1.10%, ctx=18, majf=0, minf=9 00:39:13.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.470 filename2: (groupid=0, jobs=1): err= 0: pid=810411: Wed Nov 20 06:49:43 2024 00:39:13.470 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec) 00:39:13.470 slat (nsec): min=14932, max=91328, avg=25990.08, stdev=12883.89 00:39:13.470 clat (usec): min=9544, max=51662, avg=29866.87, stdev=1734.78 00:39:13.470 lat (usec): min=9563, max=51701, avg=29892.86, stdev=1735.65 00:39:13.470 clat percentiles (usec): 00:39:13.470 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:39:13.470 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 
60.00th=[30016], 00:39:13.470 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:39:13.470 | 99.00th=[30802], 99.50th=[31589], 99.90th=[51643], 99.95th=[51643], 00:39:13.470 | 99.99th=[51643] 00:39:13.470 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2118.55, stdev=77.01, samples=20 00:39:13.470 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20 00:39:13.470 lat (msec) : 10=0.30%, 20=0.04%, 50=99.36%, 100=0.30% 00:39:13.470 cpu : usr=98.29%, sys=1.30%, ctx=13, majf=0, minf=9 00:39:13.470 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.470 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.470 00:39:13.470 Run status group 0 (all jobs): 00:39:13.470 READ: bw=49.8MiB/s (52.2MB/s), 2118KiB/s-2153KiB/s (2168kB/s-2205kB/s), io=499MiB (523MB), run=10002-10021msec 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 bdev_null0 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 [2024-11-20 06:49:44.055723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 bdev_null1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:13.471 { 00:39:13.471 "params": { 00:39:13.471 "name": "Nvme$subsystem", 00:39:13.471 "trtype": "$TEST_TRANSPORT", 00:39:13.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:13.471 "adrfam": "ipv4", 00:39:13.471 "trsvcid": "$NVMF_PORT", 00:39:13.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:13.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:13.471 "hdgst": ${hdgst:-false}, 00:39:13.471 "ddgst": ${ddgst:-false} 00:39:13.471 }, 00:39:13.471 "method": "bdev_nvme_attach_controller" 00:39:13.471 } 00:39:13.471 EOF 00:39:13.471 )") 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:39:13.471 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:13.472 { 00:39:13.472 "params": { 00:39:13.472 "name": "Nvme$subsystem", 00:39:13.472 "trtype": "$TEST_TRANSPORT", 00:39:13.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:13.472 "adrfam": "ipv4", 00:39:13.472 "trsvcid": "$NVMF_PORT", 00:39:13.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:13.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:13.472 "hdgst": ${hdgst:-false}, 00:39:13.472 "ddgst": ${ddgst:-false} 00:39:13.472 }, 00:39:13.472 "method": "bdev_nvme_attach_controller" 00:39:13.472 } 00:39:13.472 EOF 00:39:13.472 )") 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file++ )) 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:13.472 "params": { 00:39:13.472 "name": "Nvme0", 00:39:13.472 "trtype": "tcp", 00:39:13.472 "traddr": "10.0.0.2", 00:39:13.472 "adrfam": "ipv4", 00:39:13.472 "trsvcid": "4420", 00:39:13.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:13.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:13.472 "hdgst": false, 00:39:13.472 "ddgst": false 00:39:13.472 }, 00:39:13.472 "method": "bdev_nvme_attach_controller" 00:39:13.472 },{ 00:39:13.472 "params": { 00:39:13.472 "name": "Nvme1", 00:39:13.472 "trtype": "tcp", 00:39:13.472 "traddr": "10.0.0.2", 00:39:13.472 "adrfam": "ipv4", 00:39:13.472 "trsvcid": "4420", 00:39:13.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:13.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:13.472 "hdgst": false, 00:39:13.472 "ddgst": false 00:39:13.472 }, 00:39:13.472 "method": "bdev_nvme_attach_controller" 00:39:13.472 }' 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:13.472 06:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:13.472 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:13.472 ... 00:39:13.472 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:13.472 ... 
00:39:13.472 fio-3.35 00:39:13.472 Starting 4 threads 00:39:18.732 00:39:18.732 filename0: (groupid=0, jobs=1): err= 0: pid=812395: Wed Nov 20 06:49:50 2024 00:39:18.732 read: IOPS=2897, BW=22.6MiB/s (23.7MB/s)(113MiB/5002msec) 00:39:18.732 slat (nsec): min=5988, max=32994, avg=8818.95, stdev=3097.63 00:39:18.732 clat (usec): min=825, max=5060, avg=2732.81, stdev=417.40 00:39:18.732 lat (usec): min=838, max=5072, avg=2741.62, stdev=417.33 00:39:18.732 clat percentiles (usec): 00:39:18.732 | 1.00th=[ 1549], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2409], 00:39:18.732 | 30.00th=[ 2507], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:39:18.732 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3163], 95.00th=[ 3326], 00:39:18.732 | 99.00th=[ 3916], 99.50th=[ 4113], 99.90th=[ 4686], 99.95th=[ 4948], 00:39:18.732 | 99.99th=[ 5014] 00:39:18.732 bw ( KiB/s): min=21408, max=25712, per=27.10%, avg=23177.60, stdev=1249.94, samples=10 00:39:18.732 iops : min= 2676, max= 3214, avg=2897.20, stdev=156.24, samples=10 00:39:18.732 lat (usec) : 1000=0.31% 00:39:18.732 lat (msec) : 2=2.77%, 4=96.20%, 10=0.72% 00:39:18.732 cpu : usr=95.42%, sys=4.24%, ctx=19, majf=0, minf=9 00:39:18.732 IO depths : 1=0.6%, 2=9.4%, 4=63.1%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.732 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.732 issued rwts: total=14491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.732 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.732 filename0: (groupid=0, jobs=1): err= 0: pid=812397: Wed Nov 20 06:49:50 2024 00:39:18.732 read: IOPS=2563, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:39:18.732 slat (nsec): min=5979, max=42488, avg=8778.52, stdev=3054.79 00:39:18.732 clat (usec): min=740, max=5490, avg=3095.29, stdev=473.21 00:39:18.732 lat (usec): min=751, max=5503, avg=3104.07, stdev=472.92 00:39:18.732 clat percentiles (usec): 00:39:18.732 | 1.00th=[ 2147], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2868], 00:39:18.732 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 3032], 00:39:18.732 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3654], 95.00th=[ 4113], 00:39:18.732 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 5342], 00:39:18.732 | 99.99th=[ 5407] 00:39:18.732 bw ( KiB/s): min=19584, max=21296, per=23.88%, avg=20426.67, stdev=579.82, samples=9 00:39:18.732 iops : min= 2448, max= 2662, avg=2553.33, stdev=72.48, samples=9 00:39:18.732 lat (usec) : 750=0.01%, 1000=0.02% 00:39:18.732 lat (msec) : 2=0.41%, 4=93.77%, 10=5.78% 00:39:18.732 cpu : usr=95.72%, sys=3.96%, ctx=7, majf=0, minf=9 00:39:18.732 IO depths : 1=0.1%, 2=2.5%, 4=69.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.732 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.732 issued rwts: total=12819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.732 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.732 filename1: (groupid=0, jobs=1): err= 0: pid=812398: Wed Nov 20 06:49:50 2024 00:39:18.732 read: IOPS=2704, BW=21.1MiB/s (22.2MB/s)(106MiB/5002msec) 00:39:18.732 slat (nsec): min=5963, max=59719, avg=8970.16, stdev=3203.34 00:39:18.732 clat (usec): min=946, max=5391, avg=2930.29, stdev=451.93 00:39:18.732 lat (usec): min=953, max=5404, avg=2939.26, stdev=451.78 00:39:18.732 clat percentiles (usec): 00:39:18.732 | 1.00th=[ 1975], 5.00th=[ 
2278], 10.00th=[ 2442], 20.00th=[ 2606], 00:39:18.732 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2966], 00:39:18.732 | 70.00th=[ 2999], 80.00th=[ 3130], 90.00th=[ 3490], 95.00th=[ 3720], 00:39:18.732 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5145], 99.95th=[ 5211], 00:39:18.732 | 99.99th=[ 5407] 00:39:18.732 bw ( KiB/s): min=20656, max=22368, per=25.31%, avg=21644.80, stdev=568.53, samples=10 00:39:18.732 iops : min= 2582, max= 2796, avg=2705.60, stdev=71.07, samples=10 00:39:18.732 lat (usec) : 1000=0.01% 00:39:18.732 lat (msec) : 2=1.17%, 4=95.74%, 10=3.08% 00:39:18.732 cpu : usr=95.84%, sys=3.86%, ctx=10, majf=0, minf=9 00:39:18.732 IO depths : 1=0.3%, 2=5.1%, 4=66.2%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.732 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.732 issued rwts: total=13530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.732 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.732 filename1: (groupid=0, jobs=1): err= 0: pid=812399: Wed Nov 20 06:49:50 2024 00:39:18.732 read: IOPS=2526, BW=19.7MiB/s (20.7MB/s)(98.7MiB/5002msec) 00:39:18.732 slat (nsec): min=5968, max=53046, avg=8490.55, stdev=3065.54 00:39:18.732 clat (usec): min=575, max=6961, avg=3141.52, stdev=472.68 00:39:18.732 lat (usec): min=586, max=6967, avg=3150.01, stdev=472.51 00:39:18.732 clat percentiles (usec): 00:39:18.732 | 1.00th=[ 2245], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2900], 00:39:18.732 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:39:18.732 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3687], 95.00th=[ 4146], 00:39:18.732 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 5604], 00:39:18.732 | 99.99th=[ 6980] 00:39:18.732 bw ( KiB/s): min=19632, max=21168, per=23.63%, avg=20212.00, stdev=489.18, samples=10 00:39:18.732 iops : min= 2454, max= 2646, avg=2526.50, stdev=61.15, samples=10 00:39:18.732 lat (usec) : 750=0.02%, 1000=0.01% 00:39:18.732 lat (msec) : 2=0.39%, 4=93.63%, 10=5.95% 00:39:18.732 cpu : usr=96.04%, sys=3.64%, ctx=11, majf=0, minf=9 00:39:18.732 IO depths : 1=0.1%, 2=1.7%, 4=70.8%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.732 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.732 issued rwts: total=12638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.732 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.732 00:39:18.732 Run status group 0 (all jobs): 00:39:18.732 READ: bw=83.5MiB/s (87.6MB/s), 19.7MiB/s-22.6MiB/s (20.7MB/s-23.7MB/s), io=418MiB (438MB), run=5001-5002msec 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.732 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.732 00:39:18.732 real 0m24.635s 00:39:18.732 user 4m52.433s 00:39:18.732 sys 0m5.653s 00:39:18.733 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:18.733 06:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.733 ************************************ 00:39:18.733 END TEST fio_dif_rand_params 00:39:18.733 ************************************ 00:39:18.990 06:49:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:18.990 06:49:50 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:18.990 06:49:50 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:18.990 06:49:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:18.990 ************************************ 00:39:18.990 START TEST fio_dif_digest 00:39:18.990 ************************************ 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:18.990 
06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:18.990 bdev_null0 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:18.990 [2024-11-20 06:49:50.641851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:18.990 06:49:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:18.990 { 00:39:18.990 "params": { 00:39:18.990 "name": "Nvme$subsystem", 00:39:18.990 "trtype": 
"$TEST_TRANSPORT", 00:39:18.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:18.991 "adrfam": "ipv4", 00:39:18.991 "trsvcid": "$NVMF_PORT", 00:39:18.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:18.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:18.991 "hdgst": ${hdgst:-false}, 00:39:18.991 "ddgst": ${ddgst:-false} 00:39:18.991 }, 00:39:18.991 "method": "bdev_nvme_attach_controller" 00:39:18.991 } 00:39:18.991 EOF 00:39:18.991 )") 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:18.991 "params": { 00:39:18.991 "name": "Nvme0", 00:39:18.991 "trtype": "tcp", 00:39:18.991 "traddr": "10.0.0.2", 00:39:18.991 "adrfam": "ipv4", 00:39:18.991 "trsvcid": "4420", 00:39:18.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:18.991 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:18.991 "hdgst": true, 00:39:18.991 "ddgst": true 00:39:18.991 }, 00:39:18.991 "method": "bdev_nvme_attach_controller" 00:39:18.991 }' 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:18.991 06:49:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:19.248 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:19.248 ... 
00:39:19.248 fio-3.35 00:39:19.248 Starting 3 threads 00:39:31.627 00:39:31.627 filename0: (groupid=0, jobs=1): err= 0: pid=813510: Wed Nov 20 06:50:01 2024 00:39:31.627 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(353MiB/10047msec) 00:39:31.627 slat (nsec): min=6290, max=48470, avg=17832.51, stdev=7232.57 00:39:31.627 clat (usec): min=3923, max=47577, avg=10642.28, stdev=1340.96 00:39:31.627 lat (usec): min=3934, max=47587, avg=10660.11, stdev=1341.05 00:39:31.627 clat percentiles (usec): 00:39:31.627 | 1.00th=[ 6915], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:39:31.627 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:39:31.627 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:39:31.627 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13435], 99.95th=[46400], 00:39:31.627 | 99.99th=[47449] 00:39:31.627 bw ( KiB/s): min=34816, max=39424, per=33.11%, avg=36122.95, stdev=890.48, samples=19 00:39:31.627 iops : min= 272, max= 308, avg=282.21, stdev= 6.96, samples=19 00:39:31.627 lat (msec) : 4=0.14%, 10=16.61%, 20=83.17%, 50=0.07% 00:39:31.627 cpu : usr=96.38%, sys=3.30%, ctx=15, majf=0, minf=48 00:39:31.627 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:31.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.627 issued rwts: total=2823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:31.627 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:31.627 filename0: (groupid=0, jobs=1): err= 0: pid=813511: Wed Nov 20 06:50:01 2024 00:39:31.627 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(338MiB/10044msec) 00:39:31.627 slat (nsec): min=6307, max=61084, avg=18032.05, stdev=7307.10 00:39:31.627 clat (usec): min=8056, max=54000, avg=11112.53, stdev=1888.32 00:39:31.627 lat (usec): min=8075, max=54032, avg=11130.56, stdev=1888.53 00:39:31.627 clat percentiles (usec): 00:39:31.627 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10159], 20.00th=[10421], 00:39:31.627 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:39:31.627 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:39:31.627 | 99.00th=[12780], 99.50th=[13042], 99.90th=[52691], 99.95th=[52691], 00:39:31.627 | 99.99th=[53740] 00:39:31.627 bw ( KiB/s): min=31744, max=35840, per=31.70%, avg=34586.95, stdev=831.27, samples=19 00:39:31.627 iops : min= 248, max= 280, avg=270.21, stdev= 6.49, samples=19 00:39:31.627 lat (msec) : 10=5.59%, 20=94.23%, 50=0.04%, 100=0.15% 00:39:31.627 cpu : usr=95.99%, sys=3.69%, ctx=12, majf=0, minf=91 00:39:31.628 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:31.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.628 issued rwts: total=2703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:31.628 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:31.628 filename0: (groupid=0, jobs=1): err= 0: pid=813512: Wed Nov 20 06:50:01 2024 00:39:31.628 read: IOPS=303, BW=37.9MiB/s (39.8MB/s)(380MiB/10008msec) 00:39:31.628 slat (nsec): min=6602, max=53541, avg=21999.60, stdev=6411.70 00:39:31.628 clat (usec): min=7365, max=53365, avg=9860.20, stdev=1516.52 00:39:31.628 lat (usec): min=7386, max=53391, avg=9882.20, stdev=1516.15 00:39:31.628 clat percentiles (usec): 00:39:31.628 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 
9241], 00:39:31.628 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:39:31.628 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10814], 00:39:31.628 | 99.00th=[11338], 99.50th=[11600], 99.90th=[12387], 99.95th=[53216], 00:39:31.628 | 99.99th=[53216] 00:39:31.628 bw ( KiB/s): min=35072, max=39680, per=35.63%, avg=38871.58, stdev=985.09, samples=19 00:39:31.628 iops : min= 274, max= 310, avg=303.68, stdev= 7.70, samples=19 00:39:31.628 lat (msec) : 10=59.83%, 20=40.07%, 100=0.10% 00:39:31.628 cpu : usr=97.00%, sys=2.68%, ctx=16, majf=0, minf=58 00:39:31.628 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:31.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.628 issued rwts: total=3037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:31.628 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:31.628 00:39:31.628 Run status group 0 (all jobs): 00:39:31.628 READ: bw=107MiB/s (112MB/s), 33.6MiB/s-37.9MiB/s (35.3MB/s-39.8MB/s), io=1070MiB (1122MB), run=10008-10047msec 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.628 00:39:31.628 real 0m11.268s 00:39:31.628 user 0m35.591s 00:39:31.628 sys 0m1.327s 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:31.628 06:50:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:31.628 ************************************ 00:39:31.628 END TEST fio_dif_digest 00:39:31.628 ************************************ 00:39:31.628 06:50:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:31.628 06:50:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.628 rmmod nvme_tcp 00:39:31.628 rmmod nvme_fabrics 00:39:31.628 rmmod nvme_keyring 00:39:31.628 06:50:01 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 804971 ']' 00:39:31.628 06:50:01 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 804971 00:39:31.628 06:50:01 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 804971 ']' 00:39:31.628 06:50:01 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 804971 00:39:31.628 06:50:01 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:39:31.628 06:50:01 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:31.628 06:50:01 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 804971 00:39:31.628 06:50:02 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:31.628 06:50:02 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:31.628 06:50:02 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 804971' 00:39:31.628 killing process with pid 804971 00:39:31.628 06:50:02 nvmf_dif -- common/autotest_common.sh@971 -- # kill 804971 00:39:31.628 06:50:02 nvmf_dif -- common/autotest_common.sh@976 -- # wait 804971 00:39:31.628 06:50:02 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:31.628 06:50:02 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:33.534 Waiting for block devices as requested 00:39:33.534 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:39:33.534 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:33.534 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:33.534 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:33.534 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:33.534 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:33.792 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:33.792 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:33.792 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:34.051 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:34.051 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:34.051 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:34.309 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:34.310 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:34.310 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:34.310 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:34.568 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:34.568 06:50:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.568 06:50:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:34.568 06:50:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.102 06:50:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.102 
00:39:37.102 real 1m14.589s 00:39:37.102 user 7m10.407s 00:39:37.102 sys 0m20.795s 00:39:37.102 06:50:08 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:37.102 06:50:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:37.102 ************************************ 00:39:37.102 END TEST nvmf_dif 00:39:37.102 ************************************ 00:39:37.102 06:50:08 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:37.102 06:50:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:37.102 06:50:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:37.102 06:50:08 -- common/autotest_common.sh@10 -- # set +x 00:39:37.102 ************************************ 00:39:37.102 START TEST nvmf_abort_qd_sizes 00:39:37.102 ************************************ 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:37.102 * Looking for test storage... 00:39:37.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:37.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.102 --rc genhtml_branch_coverage=1 00:39:37.102 --rc genhtml_function_coverage=1 00:39:37.102 --rc genhtml_legend=1 00:39:37.102 --rc geninfo_all_blocks=1 00:39:37.102 --rc geninfo_unexecuted_blocks=1 00:39:37.102 00:39:37.102 ' 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:37.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.102 --rc genhtml_branch_coverage=1 00:39:37.102 --rc genhtml_function_coverage=1 00:39:37.102 --rc genhtml_legend=1 00:39:37.102 --rc geninfo_all_blocks=1 00:39:37.102 --rc geninfo_unexecuted_blocks=1 00:39:37.102 00:39:37.102 ' 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:37.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.102 --rc genhtml_branch_coverage=1 00:39:37.102 --rc genhtml_function_coverage=1 00:39:37.102 --rc genhtml_legend=1 00:39:37.102 --rc geninfo_all_blocks=1 00:39:37.102 --rc geninfo_unexecuted_blocks=1 00:39:37.102 00:39:37.102 ' 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:37.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.102 --rc genhtml_branch_coverage=1 00:39:37.102 --rc genhtml_function_coverage=1 00:39:37.102 --rc genhtml_legend=1 00:39:37.102 --rc geninfo_all_blocks=1 00:39:37.102 --rc geninfo_unexecuted_blocks=1 00:39:37.102 00:39:37.102 ' 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.102 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:37.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.103 06:50:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:43.669 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:43.669 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:43.669 Found net devices under 0000:86:00.0: cvl_0_0 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:43.669 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:43.670 Found net devices under 0000:86:00.1: cvl_0_1 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:43.670 06:50:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:43.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:43.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:39:43.670 00:39:43.670 --- 10.0.0.2 ping statistics --- 00:39:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.670 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:43.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:43.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:39:43.670 00:39:43.670 --- 10.0.0.1 ping statistics --- 00:39:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.670 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:43.670 06:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:45.574 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:45.574 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:45.832 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:45.832 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:45.832 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:45.832 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:47.208 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:47.208 06:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=821516 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 821516 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 821516 ']' 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
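The records above assemble SPDK's point-to-point NVMe/TCP test topology: one E810 port (cvl_0_0) is moved into a dedicated network namespace to act as the target, its sibling port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule admits traffic on the NVMe/TCP port, and a one-packet ping in each direction proves reachability before the target application starts. A minimal sketch of the same plumbing, using the device names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP connects
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

This is also why every target-side command below is prefixed with 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD array): nvmf_tgt listens on 10.0.0.2 inside the namespace while the initiator-side tools dial it from 10.0.0.1 in the root namespace.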
00:39:47.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:47.209 06:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:47.209 [2024-11-20 06:50:18.955427] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:39:47.209 [2024-11-20 06:50:18.955471] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:47.209 [2024-11-20 06:50:19.038377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:47.466 [2024-11-20 06:50:19.082043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:47.466 [2024-11-20 06:50:19.082081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:47.466 [2024-11-20 06:50:19.082088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:47.466 [2024-11-20 06:50:19.082094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:47.466 [2024-11-20 06:50:19.082099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:47.466 [2024-11-20 06:50:19.083647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.466 [2024-11-20 06:50:19.083754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:47.466 [2024-11-20 06:50:19.083874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.466 [2024-11-20 06:50:19.083876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:48.032 
06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:48.032 06:50:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:48.290 ************************************ 00:39:48.290 START TEST spdk_target_abort 00:39:48.290 ************************************ 00:39:48.290 06:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:39:48.290 06:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:48.290 06:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:39:48.290 06:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.290 06:50:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:51.564 spdk_targetn1 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:51.564 [2024-11-20 06:50:22.709331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:51.564 [2024-11-20 06:50:22.755027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:51.564 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:51.565 06:50:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:54.087 Initializing NVMe Controllers 00:39:54.087 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:54.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:54.087 Initialization complete. Launching workers. 00:39:54.087 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16854, failed: 0 00:39:54.087 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1339, failed to submit 15515 00:39:54.087 success 725, unsuccessful 614, failed 0 00:39:54.087 06:50:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:54.087 06:50:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:57.361 Initializing NVMe Controllers 00:39:57.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:57.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:57.361 Initialization complete. Launching workers. 00:39:57.361 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8498, failed: 0 00:39:57.361 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1267, failed to submit 7231 00:39:57.361 success 321, unsuccessful 946, failed 0 00:39:57.361 06:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:57.618 06:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:00.901 Initializing NVMe Controllers 00:40:00.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:00.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:00.901 Initialization complete. Launching workers. 
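The loop above reruns SPDK's abort example against the same listener at queue depths 4, 24, and 64. The invocation as traced, with my reading of the flags (the -w/-M semantics are an assumption based on SPDK's perf-style option conventions; the log itself does not explain them):

    # -q N       queue depth to keep outstanding (the qds=(4 24 64) loop variable)
    # -w rw      mixed read/write workload; -M 50 presumably sets the read share to 50%
    # -o 4096    4 KiB I/O size
    # -r '...'   transport ID of the listener added via nvmf_subsystem_add_listener
    build/examples/abort -q 64 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

In the result lines, 'abort submitted N, failed to submit M' appears to split the I/Os into those for which an abort command was actually queued versus those that completed before one could be sent, and 'success/unsuccessful' counts aborts that did or did not catch their I/O in flight; that reading is my interpretation of counters the log does not define.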
00:40:00.901 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38487, failed: 0 00:40:00.901 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2885, failed to submit 35602 00:40:00.901 success 595, unsuccessful 2290, failed 0 00:40:00.901 06:50:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:00.901 06:50:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.901 06:50:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:00.901 06:50:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.901 06:50:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:00.901 06:50:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.901 06:50:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 821516 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 821516 ']' 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 821516 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 821516 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 821516' 00:40:02.805 killing process with pid 821516 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 821516 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 821516 00:40:02.805 00:40:02.805 real 0m14.613s 00:40:02.805 user 0m58.218s 00:40:02.805 sys 0m2.622s 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:02.805 ************************************ 00:40:02.805 END TEST spdk_target_abort 00:40:02.805 ************************************ 00:40:02.805 06:50:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:02.805 06:50:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:02.805 06:50:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:02.805 06:50:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:02.805 ************************************ 00:40:02.805 START TEST kernel_target_abort 00:40:02.805 
************************************ 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:40:02.805 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:02.806 06:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:06.092 Waiting for block devices as requested 00:40:06.092 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:40:06.092 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:06.092 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:06.092 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:06.092 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:06.092 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:06.092 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:06.351 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:06.351 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:06.351 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:06.351 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:06.610 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:06.610 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:06.610 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:06.868 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:06.868 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:06.868 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:07.126 No valid GPT data, bailing 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:07.126 06:50:38 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:40:07.126 00:40:07.126 Discovery Log Number of Records 2, Generation counter 2 00:40:07.126 =====Discovery Log Entry 0====== 00:40:07.126 trtype: tcp 00:40:07.126 adrfam: ipv4 00:40:07.126 subtype: current discovery subsystem 00:40:07.126 treq: not specified, sq flow control disable supported 00:40:07.126 portid: 1 00:40:07.126 trsvcid: 4420 00:40:07.126 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:07.126 traddr: 10.0.0.1 00:40:07.126 eflags: none 00:40:07.126 sectype: none 00:40:07.126 =====Discovery Log Entry 1====== 00:40:07.126 trtype: tcp 00:40:07.126 adrfam: ipv4 00:40:07.126 subtype: nvme subsystem 00:40:07.126 treq: not specified, sq flow control disable supported 00:40:07.126 portid: 1 00:40:07.126 trsvcid: 4420 00:40:07.126 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:07.126 traddr: 10.0.0.1 00:40:07.126 eflags: none 00:40:07.126 sectype: none 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:07.126 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.127 06:50:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:07.127 06:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:10.405 Initializing NVMe Controllers 00:40:10.405 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:10.405 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:10.405 Initialization complete. Launching workers. 00:40:10.405 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95299, failed: 0 00:40:10.405 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95299, failed to submit 0 00:40:10.405 success 0, unsuccessful 95299, failed 0 00:40:10.405 06:50:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:10.405 06:50:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:13.685 Initializing NVMe Controllers 00:40:13.685 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:13.685 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:13.685 Initialization complete. Launching workers. 
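The kernel_target_abort half swaps SPDK's userspace nvmf_tgt for the kernel nvmet driver, configured entirely through configfs in the trace above. xtrace does not show where each echo is redirected, so the attribute paths below are reconstructed from the standard nvmet configfs layout; the commands and values themselves match the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet                                    # exposes /sys/kernel/config/nvmet
    mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
    echo 1            > $subsys/attr_allow_any_host   # assumed attribute name
    echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
    echo 1            > $subsys/namespaces/1/enable
    echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
    echo tcp          > $nvmet/ports/1/addr_trtype
    echo 4420         > $nvmet/ports/1/addr_trsvcid
    echo ipv4         > $nvmet/ports/1/addr_adrfam
    ln -s $subsys $nvmet/ports/1/subsystems/          # bind the subsystem to the port

The nvme discover run against 10.0.0.1:4420, whose two discovery-log records appear above, confirms the kernel target exports both the discovery subsystem and nqn.2016-06.io.spdk:testnqn before the abort passes start.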
00:40:13.685 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151550, failed: 0 00:40:13.685 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38286, failed to submit 113264 00:40:13.685 success 0, unsuccessful 38286, failed 0 00:40:13.685 06:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:13.685 06:50:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:16.991 Initializing NVMe Controllers 00:40:16.991 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:16.991 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:16.991 Initialization complete. Launching workers. 00:40:16.991 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142563, failed: 0 00:40:16.991 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35714, failed to submit 106849 00:40:16.991 success 0, unsuccessful 35714, failed 0 00:40:16.991 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:16.991 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:16.991 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:40:16.991 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:16.991 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:16.991 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:16.991 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:16.992 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:40:16.992 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:40:16.992 06:50:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:19.520 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:19.520 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:40:19.520 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:20.896 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:40:20.896 00:40:20.896 real 0m18.027s 00:40:20.896 user 0m9.093s 00:40:20.896 sys 0m5.122s 00:40:20.896 06:50:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:20.896 06:50:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:20.896 ************************************ 00:40:20.896 END TEST kernel_target_abort 00:40:20.896 ************************************ 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:20.896 rmmod nvme_tcp 00:40:20.896 rmmod nvme_fabrics 00:40:20.896 rmmod nvme_keyring 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 821516 ']' 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 821516 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 821516 ']' 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 821516 00:40:20.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (821516) - No such process 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 821516 is not found' 00:40:20.896 Process with pid 821516 is not found 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:20.896 06:50:52 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:24.182 Waiting for block devices as requested 00:40:24.182 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:40:24.182 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:24.182 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:24.182 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:24.182 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:24.182 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:24.182 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:24.182 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:24.441 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:24.441 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:24.441 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:24.699 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:24.699 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:24.699 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:24.958 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:24.958 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:24.958 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:25.217 06:50:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.119 06:50:58 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:27.119 00:40:27.119 real 0m50.467s 00:40:27.119 user 1m11.681s 00:40:27.119 sys 0m16.687s 00:40:27.119 06:50:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:27.119 06:50:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:27.119 ************************************ 00:40:27.119 END TEST nvmf_abort_qd_sizes 00:40:27.119 ************************************ 00:40:27.119 06:50:58 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:27.119 06:50:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:27.119 06:50:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:27.119 06:50:58 -- common/autotest_common.sh@10 -- # set +x 00:40:27.378 ************************************ 00:40:27.378 START TEST keyring_file 00:40:27.378 ************************************ 00:40:27.378 06:50:58 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:27.378 * Looking for test storage... 
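Teardown in the records above mirrors setup in reverse: clean_kernel_target unlinks the port and removes the configfs directories innermost-first before unloading nvmet, nvmftestfini unloads the host-side NVMe/TCP modules, and only the SPDK-tagged iptables rules are stripped by filtering the saved ruleset. Condensed from the trace (reusing the variables from the setup sketch):

    rm -f $nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir $subsys/namespaces/1 $nvmet/ports/1 $subsys
    modprobe -r nvmet_tcp nvmet
    modprobe -v -r nvme-tcp nvme-fabrics                    # host side; the rmmod lines above are its output
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only rules tagged with the SPDK_NVMF comment
    ip -4 addr flush cvl_0_1                                # return the initiator port to a clean state

Ordering matters here: the port link and namespace must go before the subsystem rmdir can succeed, and modprobe -r only works once configfs no longer references nvmet.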
00:40:27.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:27.378 06:50:59 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:27.378 06:50:59 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:40:27.378 06:50:59 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:27.378 06:50:59 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.378 06:50:59 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.379 06:50:59 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:27.379 06:50:59 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.379 06:50:59 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:27.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.379 --rc genhtml_branch_coverage=1 00:40:27.379 --rc genhtml_function_coverage=1 00:40:27.379 --rc genhtml_legend=1 00:40:27.379 --rc geninfo_all_blocks=1 00:40:27.379 --rc geninfo_unexecuted_blocks=1 00:40:27.379 00:40:27.379 ' 00:40:27.379 06:50:59 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:27.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.379 --rc genhtml_branch_coverage=1 00:40:27.379 --rc genhtml_function_coverage=1 00:40:27.379 --rc genhtml_legend=1 00:40:27.379 --rc geninfo_all_blocks=1 
00:40:27.379 --rc geninfo_unexecuted_blocks=1 00:40:27.379 00:40:27.379 ' 00:40:27.379 06:50:59 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:27.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.379 --rc genhtml_branch_coverage=1 00:40:27.379 --rc genhtml_function_coverage=1 00:40:27.379 --rc genhtml_legend=1 00:40:27.379 --rc geninfo_all_blocks=1 00:40:27.379 --rc geninfo_unexecuted_blocks=1 00:40:27.379 00:40:27.379 ' 00:40:27.379 06:50:59 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:27.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.379 --rc genhtml_branch_coverage=1 00:40:27.379 --rc genhtml_function_coverage=1 00:40:27.379 --rc genhtml_legend=1 00:40:27.379 --rc geninfo_all_blocks=1 00:40:27.379 --rc geninfo_unexecuted_blocks=1 00:40:27.379 00:40:27.379 ' 00:40:27.379 06:50:59 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:27.379 06:50:59 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.379 06:50:59 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.379 06:50:59 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.379 06:50:59 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.379 06:50:59 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.379 06:50:59 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.379 06:50:59 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.379 06:50:59 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:27.379 06:50:59 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:27.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:27.379 06:50:59 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:27.379 06:50:59 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:27.379 06:50:59 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:27.379 06:50:59 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:27.379 06:50:59 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:27.379 06:50:59 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
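[editor's note] The prep_key calls traced below build NVMe TLS PSK files from a hex secret. A minimal sketch of the idea, assuming (as the "NVMeTLSkey-1" prefix and the python step in the trace suggest) that the interchange form is base64 of the secret plus an appended CRC32 — the function name, the hex decoding, and the little-endian CRC detail are illustrative assumptions, not SPDK's exact implementation:

# Hedged sketch, not SPDK's code: emit a key file in the
# "NVMeTLSkey-1:<digest>:<base64>:" interchange form seen in this trace.
prep_key_sketch() {
  local name=$1 key=$2 digest=$3 path
  path=$(mktemp)
  python3 - "$key" "$digest" > "$path" <<'PYEOF'
import base64, sys, zlib
secret = bytes.fromhex(sys.argv[1])             # assumption: hex-decoded secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumption: CRC32 appended LE
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]),
                                 base64.b64encode(secret + crc).decode()))
PYEOF
  chmod 0600 "$path"  # keyring_file_add_key later rejects wider permissions
  echo "$path"
}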
00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rRtJb2Ej9o 00:40:27.379 06:50:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:27.379 06:50:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rRtJb2Ej9o 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rRtJb2Ej9o 00:40:27.637 06:50:59 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rRtJb2Ej9o 00:40:27.637 06:50:59 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.23yQ5Kv7UN 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:27.637 06:50:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:27.637 06:50:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:27.637 06:50:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:27.637 06:50:59 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:27.637 06:50:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:27.637 06:50:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.23yQ5Kv7UN 00:40:27.637 06:50:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.23yQ5Kv7UN 00:40:27.637 06:50:59 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.23yQ5Kv7UN 00:40:27.637 06:50:59 keyring_file -- keyring/file.sh@30 -- # tgtpid=830325 00:40:27.637 06:50:59 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:27.637 06:50:59 keyring_file -- keyring/file.sh@32 -- # waitforlisten 830325 00:40:27.637 06:50:59 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 830325 ']' 00:40:27.637 06:50:59 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.637 06:50:59 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:27.637 06:50:59 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:27.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.637 06:50:59 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:27.637 06:50:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:27.637 [2024-11-20 06:50:59.327864] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:40:27.637 [2024-11-20 06:50:59.327910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830325 ] 00:40:27.637 [2024-11-20 06:50:59.402294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.637 [2024-11-20 06:50:59.445320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:40:27.894 06:50:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:27.894 [2024-11-20 06:50:59.661598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:27.894 null0 00:40:27.894 [2024-11-20 06:50:59.693653] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:27.894 [2024-11-20 06:50:59.694024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.894 06:50:59 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.894 06:50:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:27.894 [2024-11-20 06:50:59.721722] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:27.894 request: 00:40:27.894 { 00:40:27.894 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:27.894 "secure_channel": false, 00:40:27.894 "listen_address": { 00:40:28.152 "trtype": "tcp", 00:40:28.152 "traddr": "127.0.0.1", 00:40:28.152 "trsvcid": "4420" 00:40:28.152 }, 00:40:28.152 "method": "nvmf_subsystem_add_listener", 00:40:28.152 "req_id": 1 00:40:28.152 } 00:40:28.152 Got JSON-RPC error response 00:40:28.152 response: 00:40:28.152 { 00:40:28.152 "code": 
-32602, 00:40:28.152 "message": "Invalid parameters" 00:40:28.152 } 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:28.152 06:50:59 keyring_file -- keyring/file.sh@47 -- # bperfpid=830339 00:40:28.152 06:50:59 keyring_file -- keyring/file.sh@49 -- # waitforlisten 830339 /var/tmp/bperf.sock 00:40:28.152 06:50:59 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 830339 ']' 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:28.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:28.152 [2024-11-20 06:50:59.774993] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:40:28.152 [2024-11-20 06:50:59.775034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830339 ] 00:40:28.152 [2024-11-20 06:50:59.849775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.152 [2024-11-20 06:50:59.891865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:28.152 06:50:59 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:40:28.152 06:50:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rRtJb2Ej9o 00:40:28.152 06:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rRtJb2Ej9o 00:40:28.410 06:51:00 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.23yQ5Kv7UN 00:40:28.410 06:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.23yQ5Kv7UN 00:40:28.667 06:51:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:28.667 06:51:00 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:28.667 06:51:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.667 06:51:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:28.667 06:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.925 
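[editor's note] The key/refcnt checks that follow are easier to read pulled out of the xtrace. Reconstructed from the keyring/common.sh lines in this log, with the rpc.py path shortened for readability:

bperf_cmd() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

get_key() {
  # Select one key object out of the keyring_get_keys array by name.
  bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}

get_refcnt() {
  # A registered but unused key reports refcnt 1; a controller attached
  # with --psk holds an extra reference, bumping it to 2.
  get_key "$1" | jq -r .refcnt
}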
06:51:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rRtJb2Ej9o == \/\t\m\p\/\t\m\p\.\r\R\t\J\b\2\E\j\9\o ]] 00:40:28.925 06:51:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:28.925 06:51:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:28.925 06:51:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.925 06:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.925 06:51:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:28.925 06:51:00 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.23yQ5Kv7UN == \/\t\m\p\/\t\m\p\.\2\3\y\Q\5\K\v\7\U\N ]] 00:40:28.925 06:51:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:28.925 06:51:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:28.925 06:51:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:29.182 06:51:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.182 06:51:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.182 06:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.182 06:51:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:29.183 06:51:00 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:29.183 06:51:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:29.183 06:51:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:29.183 06:51:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.183 06:51:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:29.183 06:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.440 06:51:01 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:29.440 06:51:01 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:29.440 06:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:29.698 [2024-11-20 06:51:01.329413] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:29.698 nvme0n1 00:40:29.698 06:51:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:29.698 06:51:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:29.698 06:51:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:29.698 06:51:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.698 06:51:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.698 06:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.987 06:51:01 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:29.988 06:51:01 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:29.988 06:51:01 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:40:29.988 06:51:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:29.988 06:51:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.988 06:51:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:29.988 06:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:30.286 06:51:01 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:30.287 06:51:01 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:30.287 Running I/O for 1 seconds... 00:40:31.270 19255.00 IOPS, 75.21 MiB/s 00:40:31.270 Latency(us) 00:40:31.270 [2024-11-20T05:51:03.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.270 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:31.270 nvme0n1 : 1.00 19305.41 75.41 0.00 0.00 6618.29 2543.42 14792.41 00:40:31.270 [2024-11-20T05:51:03.106Z] =================================================================================================================== 00:40:31.270 [2024-11-20T05:51:03.106Z] Total : 19305.41 75.41 0.00 0.00 6618.29 2543.42 14792.41 00:40:31.270 { 00:40:31.270 "results": [ 00:40:31.270 { 00:40:31.270 "job": "nvme0n1", 00:40:31.270 "core_mask": "0x2", 00:40:31.270 "workload": "randrw", 00:40:31.270 "percentage": 50, 00:40:31.270 "status": "finished", 00:40:31.270 "queue_depth": 128, 00:40:31.270 "io_size": 4096, 00:40:31.270 "runtime": 1.004071, 00:40:31.270 "iops": 19305.40768531309, 00:40:31.270 "mibps": 75.41174877075426, 00:40:31.270 "io_failed": 0, 00:40:31.270 "io_timeout": 0, 00:40:31.270 "avg_latency_us": 6618.285695418903, 00:40:31.270 "min_latency_us": 2543.4209523809523, 00:40:31.270 "max_latency_us": 14792.411428571428 00:40:31.270 } 00:40:31.270 ], 00:40:31.270 "core_count": 1 00:40:31.270 } 00:40:31.270 06:51:02 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:31.270 06:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:31.540 06:51:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:31.540 06:51:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:31.540 06:51:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:31.540 06:51:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:31.540 06:51:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:31.541 06:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:31.541 06:51:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:31.541 06:51:03 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:31.541 06:51:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:31.541 06:51:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:31.541 06:51:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:31.541 06:51:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:31.541 06:51:03 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:31.798 06:51:03 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:31.798 06:51:03 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:31.798 06:51:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:31.798 06:51:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:31.798 06:51:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:31.798 06:51:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:31.798 06:51:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:31.798 06:51:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:31.798 06:51:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:31.798 06:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:32.056 [2024-11-20 06:51:03.695756] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:32.056 [2024-11-20 06:51:03.696024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192f1f0 (107): Transport endpoint is not connected 00:40:32.056 [2024-11-20 06:51:03.697019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192f1f0 (9): Bad file descriptor 00:40:32.056 [2024-11-20 06:51:03.698021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:32.056 [2024-11-20 06:51:03.698031] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:32.056 [2024-11-20 06:51:03.698038] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:32.056 [2024-11-20 06:51:03.698047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
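[editor's note] The NOT/valid_exec_arg dance above is the suite's negative-assertion idiom: attaching with key1 must fail because the listener was configured against key0's PSK. A rough sketch of the pattern, simplified from the es bookkeeping in the trace and using the bperf_cmd helper sketched earlier:

NOT() {
  # Succeed only when the wrapped command fails; flag unexpected success.
  if "$@"; then
    return 1
  fi
  return 0
}

# Expected to fail with -5 (Input/output error), as in the request dump below:
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1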
00:40:32.056 request: 00:40:32.056 { 00:40:32.056 "name": "nvme0", 00:40:32.056 "trtype": "tcp", 00:40:32.056 "traddr": "127.0.0.1", 00:40:32.056 "adrfam": "ipv4", 00:40:32.056 "trsvcid": "4420", 00:40:32.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:32.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:32.056 "prchk_reftag": false, 00:40:32.056 "prchk_guard": false, 00:40:32.056 "hdgst": false, 00:40:32.056 "ddgst": false, 00:40:32.056 "psk": "key1", 00:40:32.056 "allow_unrecognized_csi": false, 00:40:32.056 "method": "bdev_nvme_attach_controller", 00:40:32.056 "req_id": 1 00:40:32.056 } 00:40:32.056 Got JSON-RPC error response 00:40:32.056 response: 00:40:32.056 { 00:40:32.056 "code": -5, 00:40:32.056 "message": "Input/output error" 00:40:32.056 } 00:40:32.056 06:51:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:32.056 06:51:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:32.056 06:51:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:32.056 06:51:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:32.056 06:51:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:32.056 06:51:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:32.056 06:51:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:32.056 06:51:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:32.056 06:51:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:32.056 06:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:32.314 06:51:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:32.314 06:51:03 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:32.314 06:51:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:32.314 06:51:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:32.314 06:51:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:32.314 06:51:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:32.314 06:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:32.314 06:51:04 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:32.314 06:51:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:32.314 06:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:32.571 06:51:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:32.571 06:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:32.829 06:51:04 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:32.829 06:51:04 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:32.829 06:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.087 06:51:04 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:40:33.087 06:51:04 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.rRtJb2Ej9o 00:40:33.087 06:51:04 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rRtJb2Ej9o 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rRtJb2Ej9o 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rRtJb2Ej9o 00:40:33.087 06:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rRtJb2Ej9o 00:40:33.087 [2024-11-20 06:51:04.848064] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rRtJb2Ej9o': 0100660 00:40:33.087 [2024-11-20 06:51:04.848090] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:33.087 request: 00:40:33.087 { 00:40:33.087 "name": "key0", 00:40:33.087 "path": "/tmp/tmp.rRtJb2Ej9o", 00:40:33.087 "method": "keyring_file_add_key", 00:40:33.087 "req_id": 1 00:40:33.087 } 00:40:33.087 Got JSON-RPC error response 00:40:33.087 response: 00:40:33.087 { 00:40:33.087 "code": -1, 00:40:33.087 "message": "Operation not permitted" 00:40:33.087 } 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:33.087 06:51:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:33.087 06:51:04 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.rRtJb2Ej9o 00:40:33.087 06:51:04 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rRtJb2Ej9o 00:40:33.087 06:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rRtJb2Ej9o 00:40:33.344 06:51:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.rRtJb2Ej9o 00:40:33.344 06:51:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:33.344 06:51:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:33.344 06:51:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.344 06:51:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.344 06:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.344 06:51:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:33.602 06:51:05 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:33.602 06:51:05 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:33.602 06:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:33.602 [2024-11-20 06:51:05.409559] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rRtJb2Ej9o': No such file or directory 00:40:33.602 [2024-11-20 06:51:05.409579] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:33.602 [2024-11-20 06:51:05.409595] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:33.602 [2024-11-20 06:51:05.409602] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:33.602 [2024-11-20 06:51:05.409609] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:33.602 [2024-11-20 06:51:05.409616] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:33.602 request: 00:40:33.602 { 00:40:33.602 "name": "nvme0", 00:40:33.602 "trtype": "tcp", 00:40:33.602 "traddr": "127.0.0.1", 00:40:33.602 "adrfam": "ipv4", 00:40:33.602 "trsvcid": "4420", 00:40:33.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:33.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:33.602 "prchk_reftag": false, 00:40:33.602 "prchk_guard": false, 00:40:33.602 "hdgst": false, 00:40:33.602 "ddgst": false, 00:40:33.602 "psk": "key0", 00:40:33.602 "allow_unrecognized_csi": false, 00:40:33.602 "method": "bdev_nvme_attach_controller", 00:40:33.602 "req_id": 1 00:40:33.602 } 00:40:33.602 Got JSON-RPC error response 00:40:33.602 response: 00:40:33.602 { 00:40:33.602 "code": -19, 00:40:33.602 "message": "No such device" 00:40:33.602 } 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:33.602 06:51:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:33.602 06:51:05 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:33.602 06:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:33.860 06:51:05 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XwbTbUKQ0u 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:33.860 06:51:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:33.860 06:51:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:33.860 06:51:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:33.860 06:51:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:33.860 06:51:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:33.860 06:51:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XwbTbUKQ0u 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XwbTbUKQ0u 00:40:33.860 06:51:05 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.XwbTbUKQ0u 00:40:33.860 06:51:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XwbTbUKQ0u 00:40:33.860 06:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XwbTbUKQ0u 00:40:34.117 06:51:05 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:34.117 06:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:34.375 nvme0n1 00:40:34.375 06:51:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:34.375 06:51:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:34.375 06:51:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:34.375 06:51:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:34.375 06:51:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:34.375 06:51:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.633 06:51:06 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:34.633 06:51:06 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:34.633 06:51:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:34.890 06:51:06 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:34.890 06:51:06 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:34.890 06:51:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:34.890 06:51:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:34.890 06:51:06 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.890 06:51:06 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:34.890 06:51:06 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:34.890 06:51:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:34.890 06:51:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:34.890 06:51:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:34.890 06:51:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.890 06:51:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:35.148 06:51:06 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:35.148 06:51:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:35.148 06:51:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:35.405 06:51:07 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:35.405 06:51:07 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:35.405 06:51:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:35.405 06:51:07 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:35.405 06:51:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XwbTbUKQ0u 00:40:35.405 06:51:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XwbTbUKQ0u 00:40:35.663 06:51:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.23yQ5Kv7UN 00:40:35.663 06:51:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.23yQ5Kv7UN 00:40:35.920 06:51:07 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:35.920 06:51:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:36.201 nvme0n1 00:40:36.201 06:51:07 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:36.201 06:51:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:36.459 06:51:08 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:36.459 "subsystems": [ 00:40:36.459 { 00:40:36.459 "subsystem": "keyring", 00:40:36.459 "config": [ 00:40:36.459 { 00:40:36.459 "method": "keyring_file_add_key", 00:40:36.459 "params": { 00:40:36.459 "name": "key0", 00:40:36.459 "path": "/tmp/tmp.XwbTbUKQ0u" 00:40:36.459 } 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "method": "keyring_file_add_key", 00:40:36.459 "params": { 00:40:36.459 "name": "key1", 00:40:36.459 "path": "/tmp/tmp.23yQ5Kv7UN" 00:40:36.459 } 00:40:36.459 } 00:40:36.459 ] 00:40:36.459 
}, 00:40:36.459 { 00:40:36.459 "subsystem": "iobuf", 00:40:36.459 "config": [ 00:40:36.459 { 00:40:36.459 "method": "iobuf_set_options", 00:40:36.459 "params": { 00:40:36.459 "small_pool_count": 8192, 00:40:36.459 "large_pool_count": 1024, 00:40:36.459 "small_bufsize": 8192, 00:40:36.459 "large_bufsize": 135168, 00:40:36.459 "enable_numa": false 00:40:36.459 } 00:40:36.459 } 00:40:36.459 ] 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "subsystem": "sock", 00:40:36.459 "config": [ 00:40:36.459 { 00:40:36.459 "method": "sock_set_default_impl", 00:40:36.459 "params": { 00:40:36.459 "impl_name": "posix" 00:40:36.459 } 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "method": "sock_impl_set_options", 00:40:36.459 "params": { 00:40:36.459 "impl_name": "ssl", 00:40:36.459 "recv_buf_size": 4096, 00:40:36.459 "send_buf_size": 4096, 00:40:36.459 "enable_recv_pipe": true, 00:40:36.459 "enable_quickack": false, 00:40:36.459 "enable_placement_id": 0, 00:40:36.459 "enable_zerocopy_send_server": true, 00:40:36.459 "enable_zerocopy_send_client": false, 00:40:36.459 "zerocopy_threshold": 0, 00:40:36.459 "tls_version": 0, 00:40:36.459 "enable_ktls": false 00:40:36.459 } 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "method": "sock_impl_set_options", 00:40:36.459 "params": { 00:40:36.459 "impl_name": "posix", 00:40:36.459 "recv_buf_size": 2097152, 00:40:36.459 "send_buf_size": 2097152, 00:40:36.459 "enable_recv_pipe": true, 00:40:36.459 "enable_quickack": false, 00:40:36.459 "enable_placement_id": 0, 00:40:36.459 "enable_zerocopy_send_server": true, 00:40:36.459 "enable_zerocopy_send_client": false, 00:40:36.459 "zerocopy_threshold": 0, 00:40:36.459 "tls_version": 0, 00:40:36.459 "enable_ktls": false 00:40:36.459 } 00:40:36.459 } 00:40:36.459 ] 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "subsystem": "vmd", 00:40:36.459 "config": [] 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "subsystem": "accel", 00:40:36.459 "config": [ 00:40:36.459 { 00:40:36.459 "method": "accel_set_options", 00:40:36.459 "params": { 00:40:36.459 "small_cache_size": 128, 00:40:36.459 "large_cache_size": 16, 00:40:36.459 "task_count": 2048, 00:40:36.459 "sequence_count": 2048, 00:40:36.459 "buf_count": 2048 00:40:36.459 } 00:40:36.459 } 00:40:36.459 ] 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "subsystem": "bdev", 00:40:36.459 "config": [ 00:40:36.459 { 00:40:36.459 "method": "bdev_set_options", 00:40:36.459 "params": { 00:40:36.459 "bdev_io_pool_size": 65535, 00:40:36.459 "bdev_io_cache_size": 256, 00:40:36.459 "bdev_auto_examine": true, 00:40:36.459 "iobuf_small_cache_size": 128, 00:40:36.459 "iobuf_large_cache_size": 16 00:40:36.459 } 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "method": "bdev_raid_set_options", 00:40:36.459 "params": { 00:40:36.459 "process_window_size_kb": 1024, 00:40:36.459 "process_max_bandwidth_mb_sec": 0 00:40:36.459 } 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "method": "bdev_iscsi_set_options", 00:40:36.459 "params": { 00:40:36.459 "timeout_sec": 30 00:40:36.459 } 00:40:36.459 }, 00:40:36.459 { 00:40:36.459 "method": "bdev_nvme_set_options", 00:40:36.459 "params": { 00:40:36.459 "action_on_timeout": "none", 00:40:36.459 "timeout_us": 0, 00:40:36.459 "timeout_admin_us": 0, 00:40:36.459 "keep_alive_timeout_ms": 10000, 00:40:36.459 "arbitration_burst": 0, 00:40:36.459 "low_priority_weight": 0, 00:40:36.459 "medium_priority_weight": 0, 00:40:36.459 "high_priority_weight": 0, 00:40:36.459 "nvme_adminq_poll_period_us": 10000, 00:40:36.459 "nvme_ioq_poll_period_us": 0, 00:40:36.459 "io_queue_requests": 512, 00:40:36.459 
"delay_cmd_submit": true, 00:40:36.459 "transport_retry_count": 4, 00:40:36.459 "bdev_retry_count": 3, 00:40:36.459 "transport_ack_timeout": 0, 00:40:36.460 "ctrlr_loss_timeout_sec": 0, 00:40:36.460 "reconnect_delay_sec": 0, 00:40:36.460 "fast_io_fail_timeout_sec": 0, 00:40:36.460 "disable_auto_failback": false, 00:40:36.460 "generate_uuids": false, 00:40:36.460 "transport_tos": 0, 00:40:36.460 "nvme_error_stat": false, 00:40:36.460 "rdma_srq_size": 0, 00:40:36.460 "io_path_stat": false, 00:40:36.460 "allow_accel_sequence": false, 00:40:36.460 "rdma_max_cq_size": 0, 00:40:36.460 "rdma_cm_event_timeout_ms": 0, 00:40:36.460 "dhchap_digests": [ 00:40:36.460 "sha256", 00:40:36.460 "sha384", 00:40:36.460 "sha512" 00:40:36.460 ], 00:40:36.460 "dhchap_dhgroups": [ 00:40:36.460 "null", 00:40:36.460 "ffdhe2048", 00:40:36.460 "ffdhe3072", 00:40:36.460 "ffdhe4096", 00:40:36.460 "ffdhe6144", 00:40:36.460 "ffdhe8192" 00:40:36.460 ] 00:40:36.460 } 00:40:36.460 }, 00:40:36.460 { 00:40:36.460 "method": "bdev_nvme_attach_controller", 00:40:36.460 "params": { 00:40:36.460 "name": "nvme0", 00:40:36.460 "trtype": "TCP", 00:40:36.460 "adrfam": "IPv4", 00:40:36.460 "traddr": "127.0.0.1", 00:40:36.460 "trsvcid": "4420", 00:40:36.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:36.460 "prchk_reftag": false, 00:40:36.460 "prchk_guard": false, 00:40:36.460 "ctrlr_loss_timeout_sec": 0, 00:40:36.460 "reconnect_delay_sec": 0, 00:40:36.460 "fast_io_fail_timeout_sec": 0, 00:40:36.460 "psk": "key0", 00:40:36.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:36.460 "hdgst": false, 00:40:36.460 "ddgst": false, 00:40:36.460 "multipath": "multipath" 00:40:36.460 } 00:40:36.460 }, 00:40:36.460 { 00:40:36.460 "method": "bdev_nvme_set_hotplug", 00:40:36.460 "params": { 00:40:36.460 "period_us": 100000, 00:40:36.460 "enable": false 00:40:36.460 } 00:40:36.460 }, 00:40:36.460 { 00:40:36.460 "method": "bdev_wait_for_examine" 00:40:36.460 } 00:40:36.460 ] 00:40:36.460 }, 00:40:36.460 { 00:40:36.460 "subsystem": "nbd", 00:40:36.460 "config": [] 00:40:36.460 } 00:40:36.460 ] 00:40:36.460 }' 00:40:36.460 06:51:08 keyring_file -- keyring/file.sh@115 -- # killprocess 830339 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 830339 ']' 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@956 -- # kill -0 830339 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@957 -- # uname 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 830339 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 830339' 00:40:36.460 killing process with pid 830339 00:40:36.460 06:51:08 keyring_file -- common/autotest_common.sh@971 -- # kill 830339 00:40:36.460 Received shutdown signal, test time was about 1.000000 seconds 00:40:36.460 00:40:36.460 Latency(us) 00:40:36.460 [2024-11-20T05:51:08.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:36.460 [2024-11-20T05:51:08.296Z] =================================================================================================================== 00:40:36.460 [2024-11-20T05:51:08.296Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:36.460 06:51:08 
keyring_file -- common/autotest_common.sh@976 -- # wait 830339 00:40:36.718 06:51:08 keyring_file -- keyring/file.sh@118 -- # bperfpid=832365 00:40:36.718 06:51:08 keyring_file -- keyring/file.sh@120 -- # waitforlisten 832365 /var/tmp/bperf.sock 00:40:36.718 06:51:08 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 832365 ']' 00:40:36.718 06:51:08 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:36.718 06:51:08 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:36.718 06:51:08 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:36.718 06:51:08 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:36.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:36.718 06:51:08 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:36.718 "subsystems": [ 00:40:36.718 { 00:40:36.718 "subsystem": "keyring", 00:40:36.718 "config": [ 00:40:36.718 { 00:40:36.718 "method": "keyring_file_add_key", 00:40:36.718 "params": { 00:40:36.718 "name": "key0", 00:40:36.718 "path": "/tmp/tmp.XwbTbUKQ0u" 00:40:36.718 } 00:40:36.718 }, 00:40:36.718 { 00:40:36.718 "method": "keyring_file_add_key", 00:40:36.718 "params": { 00:40:36.718 "name": "key1", 00:40:36.718 "path": "/tmp/tmp.23yQ5Kv7UN" 00:40:36.718 } 00:40:36.718 } 00:40:36.718 ] 00:40:36.718 }, 00:40:36.718 { 00:40:36.718 "subsystem": "iobuf", 00:40:36.718 "config": [ 00:40:36.718 { 00:40:36.718 "method": "iobuf_set_options", 00:40:36.718 "params": { 00:40:36.718 "small_pool_count": 8192, 00:40:36.718 "large_pool_count": 1024, 00:40:36.718 "small_bufsize": 8192, 00:40:36.718 "large_bufsize": 135168, 00:40:36.718 "enable_numa": false 00:40:36.718 } 00:40:36.718 } 00:40:36.718 ] 00:40:36.718 }, 00:40:36.718 { 00:40:36.718 "subsystem": "sock", 00:40:36.718 "config": [ 00:40:36.718 { 00:40:36.718 "method": "sock_set_default_impl", 00:40:36.718 "params": { 00:40:36.718 "impl_name": "posix" 00:40:36.718 } 00:40:36.718 }, 00:40:36.718 { 00:40:36.718 "method": "sock_impl_set_options", 00:40:36.718 "params": { 00:40:36.718 "impl_name": "ssl", 00:40:36.718 "recv_buf_size": 4096, 00:40:36.718 "send_buf_size": 4096, 00:40:36.718 "enable_recv_pipe": true, 00:40:36.718 "enable_quickack": false, 00:40:36.718 "enable_placement_id": 0, 00:40:36.718 "enable_zerocopy_send_server": true, 00:40:36.718 "enable_zerocopy_send_client": false, 00:40:36.718 "zerocopy_threshold": 0, 00:40:36.718 "tls_version": 0, 00:40:36.718 "enable_ktls": false 00:40:36.718 } 00:40:36.718 }, 00:40:36.718 { 00:40:36.718 "method": "sock_impl_set_options", 00:40:36.718 "params": { 00:40:36.718 "impl_name": "posix", 00:40:36.718 "recv_buf_size": 2097152, 00:40:36.718 "send_buf_size": 2097152, 00:40:36.718 "enable_recv_pipe": true, 00:40:36.718 "enable_quickack": false, 00:40:36.718 "enable_placement_id": 0, 00:40:36.718 "enable_zerocopy_send_server": true, 00:40:36.718 "enable_zerocopy_send_client": false, 00:40:36.718 "zerocopy_threshold": 0, 00:40:36.718 "tls_version": 0, 00:40:36.718 "enable_ktls": false 00:40:36.718 } 00:40:36.718 } 00:40:36.718 ] 00:40:36.718 }, 00:40:36.718 { 00:40:36.718 "subsystem": "vmd", 00:40:36.718 "config": [] 00:40:36.718 }, 00:40:36.718 { 00:40:36.718 "subsystem": "accel", 00:40:36.718 "config": [ 00:40:36.718 { 
00:40:36.718 "method": "accel_set_options", 00:40:36.718 "params": { 00:40:36.718 "small_cache_size": 128, 00:40:36.718 "large_cache_size": 16, 00:40:36.718 "task_count": 2048, 00:40:36.718 "sequence_count": 2048, 00:40:36.718 "buf_count": 2048 00:40:36.718 } 00:40:36.718 } 00:40:36.718 ] 00:40:36.718 }, 00:40:36.718 { 00:40:36.718 "subsystem": "bdev", 00:40:36.718 "config": [ 00:40:36.718 { 00:40:36.718 "method": "bdev_set_options", 00:40:36.718 "params": { 00:40:36.718 "bdev_io_pool_size": 65535, 00:40:36.718 "bdev_io_cache_size": 256, 00:40:36.718 "bdev_auto_examine": true, 00:40:36.718 "iobuf_small_cache_size": 128, 00:40:36.718 "iobuf_large_cache_size": 16 00:40:36.718 } 00:40:36.718 }, 00:40:36.719 { 00:40:36.719 "method": "bdev_raid_set_options", 00:40:36.719 "params": { 00:40:36.719 "process_window_size_kb": 1024, 00:40:36.719 "process_max_bandwidth_mb_sec": 0 00:40:36.719 } 00:40:36.719 }, 00:40:36.719 { 00:40:36.719 "method": "bdev_iscsi_set_options", 00:40:36.719 "params": { 00:40:36.719 "timeout_sec": 30 00:40:36.719 } 00:40:36.719 }, 00:40:36.719 { 00:40:36.719 "method": "bdev_nvme_set_options", 00:40:36.719 "params": { 00:40:36.719 "action_on_timeout": "none", 00:40:36.719 "timeout_us": 0, 00:40:36.719 "timeout_admin_us": 0, 00:40:36.719 "keep_alive_timeout_ms": 10000, 00:40:36.719 "arbitration_burst": 0, 00:40:36.719 "low_priority_weight": 0, 00:40:36.719 "medium_priority_weight": 0, 00:40:36.719 "high_priority_weight": 0, 00:40:36.719 "nvme_adminq_poll_period_us": 10000, 00:40:36.719 "nvme_ioq_poll_period_us": 0, 00:40:36.719 "io_queue_requests": 512, 00:40:36.719 "delay_cmd_submit": true, 00:40:36.719 "transport_retry_count": 4, 00:40:36.719 "bdev_retry_count": 3, 00:40:36.719 "transport_ack_timeout": 0, 00:40:36.719 "ctrlr_loss_timeout_sec": 0, 00:40:36.719 "reconnect_delay_sec": 0, 00:40:36.719 "fast_io_fail_timeout_sec": 0, 00:40:36.719 "disable_auto_failback": false, 00:40:36.719 "generate_uuids": false, 00:40:36.719 "transport_tos": 0, 00:40:36.719 "nvme_error_stat": false, 00:40:36.719 "rdma_srq_size": 0, 00:40:36.719 "io_path_stat": false, 00:40:36.719 "allow_accel_sequence": false, 00:40:36.719 "rdma_max_cq_size": 0, 00:40:36.719 "rdma_cm_event_timeout_ms": 0, 00:40:36.719 "dhchap_digests": [ 00:40:36.719 "sha256", 00:40:36.719 "sha384", 00:40:36.719 "sha512" 00:40:36.719 ], 00:40:36.719 "dhchap_dhgroups": [ 00:40:36.719 "null", 00:40:36.719 "ffdhe2048", 00:40:36.719 "ffdhe3072", 00:40:36.719 "ffdhe4096", 00:40:36.719 "ffdhe6144", 00:40:36.719 "ffdhe8192" 00:40:36.719 ] 00:40:36.719 } 00:40:36.719 }, 00:40:36.719 { 00:40:36.719 "method": "bdev_nvme_attach_controller", 00:40:36.719 "params": { 00:40:36.719 "name": "nvme0", 00:40:36.719 "trtype": "TCP", 00:40:36.719 "adrfam": "IPv4", 00:40:36.719 "traddr": "127.0.0.1", 00:40:36.719 "trsvcid": "4420", 00:40:36.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:36.719 "prchk_reftag": false, 00:40:36.719 "prchk_guard": false, 00:40:36.719 "ctrlr_loss_timeout_sec": 0, 00:40:36.719 "reconnect_delay_sec": 0, 00:40:36.719 "fast_io_fail_timeout_sec": 0, 00:40:36.719 "psk": "key0", 00:40:36.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:36.719 "hdgst": false, 00:40:36.719 "ddgst": false, 00:40:36.719 "multipath": "multipath" 00:40:36.719 } 00:40:36.719 }, 00:40:36.719 { 00:40:36.719 "method": "bdev_nvme_set_hotplug", 00:40:36.719 "params": { 00:40:36.719 "period_us": 100000, 00:40:36.719 "enable": false 00:40:36.719 } 00:40:36.719 }, 00:40:36.719 { 00:40:36.719 "method": "bdev_wait_for_examine" 00:40:36.719 } 00:40:36.719 ] 
00:40:36.719     },
00:40:36.719     {
00:40:36.719       "subsystem": "nbd",
00:40:36.719       "config": []
00:40:36.719     }
00:40:36.719   ]
00:40:36.719 }'
00:40:36.719 06:51:08 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable
00:40:36.719 06:51:08 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:40:36.719 [2024-11-20 06:51:08.376912] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:40:36.719 [2024-11-20 06:51:08.376965] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832365 ]
00:40:36.719 [2024-11-20 06:51:08.448107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:36.719 [2024-11-20 06:51:08.484978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:36.977 [2024-11-20 06:51:08.645678] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:40:37.545 06:51:09 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:40:37.545 06:51:09 keyring_file -- common/autotest_common.sh@866 -- # return 0
00:40:37.545 06:51:09 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys
00:40:37.545 06:51:09 keyring_file -- keyring/file.sh@121 -- # jq length
00:40:37.545 06:51:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:37.800 06:51:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:40:37.800 06:51:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0
00:40:37.800 06:51:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:37.800 06:51:09 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:37.801 06:51:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:37.801 06:51:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:37.801 06:51:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:37.801 06:51:09 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 ))
00:40:37.801 06:51:09 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1
00:40:37.801 06:51:09 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:40:37.801 06:51:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:37.801 06:51:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:37.801 06:51:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:37.801 06:51:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:38.056 06:51:09 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 ))
00:40:38.056 06:51:09 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name'
00:40:38.056 06:51:09 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers
00:40:38.056 06:51:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:40:38.313 06:51:10 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]]
00:40:38.313 06:51:10 keyring_file -- keyring/file.sh@1 -- # cleanup
00:40:38.313 06:51:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XwbTbUKQ0u /tmp/tmp.23yQ5Kv7UN
00:40:38.313 06:51:10 keyring_file -- keyring/file.sh@20 -- # killprocess 832365
00:40:38.313 06:51:10 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 832365 ']'
00:40:38.313 06:51:10 keyring_file -- common/autotest_common.sh@956 -- # kill -0 832365
00:40:38.313 06:51:10 keyring_file -- common/autotest_common.sh@957 -- # uname
00:40:38.314 06:51:10 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:40:38.314 06:51:10 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 832365
00:40:38.314 06:51:10 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:40:38.314 06:51:10 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:40:38.314 06:51:10 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 832365'
killing process with pid 832365
06:51:10 keyring_file -- common/autotest_common.sh@971 -- # kill 832365
00:40:38.314 Received shutdown signal, test time was about 1.000000 seconds
00:40:38.314
00:40:38.314                                                                                                  Latency(us)
[2024-11-20T05:51:10.150Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:40:38.314 [2024-11-20T05:51:10.150Z] ===================================================================================================================
00:40:38.314 [2024-11-20T05:51:10.150Z] Total                       :                  0.00       0.00       0.00       0.00        0.00 18446744073709551616.00       0.00
00:40:38.314 06:51:10 keyring_file -- common/autotest_common.sh@976 -- # wait 832365
00:40:38.571 06:51:10 keyring_file -- keyring/file.sh@21 -- # killprocess 830325
00:40:38.571 06:51:10 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 830325 ']'
00:40:38.571 06:51:10 keyring_file -- common/autotest_common.sh@956 -- # kill -0 830325
00:40:38.571 06:51:10 keyring_file -- common/autotest_common.sh@957 -- # uname
00:40:38.571 06:51:10 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:40:38.571 06:51:10 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 830325
00:40:38.571 06:51:10 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:40:38.571 06:51:10 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:40:38.571 06:51:10 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 830325'
killing process with pid 830325
06:51:10 keyring_file -- common/autotest_common.sh@971 -- # kill 830325
06:51:10 keyring_file -- common/autotest_common.sh@976 -- # wait 830325
00:40:38.829
00:40:38.829 real 0m11.612s
00:40:38.829 user 0m28.845s
00:40:38.829 sys 0m2.661s
00:40:38.829 06:51:10 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable
00:40:38.829 06:51:10 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:40:38.829 ************************************
00:40:38.829 END TEST keyring_file
00:40:38.829 ************************************
00:40:38.830 06:51:10 -- spdk/autotest.sh@289 -- # [[ y == y ]]
00:40:38.830 06:51:10 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:40:38.830 06:51:10 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:40:38.830 06:51:10 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:40:38.830 06:51:10 -- common/autotest_common.sh@10 -- # set
+x 00:40:38.830 ************************************ 00:40:38.830 START TEST keyring_linux 00:40:38.830 ************************************ 00:40:38.830 06:51:10 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:38.830 Joined session keyring: 708725296 00:40:39.089 * Looking for test storage... 00:40:39.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:39.089 06:51:10 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:39.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.089 --rc genhtml_branch_coverage=1 00:40:39.089 --rc genhtml_function_coverage=1 00:40:39.089 --rc genhtml_legend=1 00:40:39.089 --rc geninfo_all_blocks=1 00:40:39.089 --rc geninfo_unexecuted_blocks=1 00:40:39.089 00:40:39.089 ' 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:39.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.089 --rc genhtml_branch_coverage=1 00:40:39.089 --rc genhtml_function_coverage=1 00:40:39.089 --rc genhtml_legend=1 00:40:39.089 --rc geninfo_all_blocks=1 00:40:39.089 --rc geninfo_unexecuted_blocks=1 00:40:39.089 00:40:39.089 ' 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:39.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.089 --rc genhtml_branch_coverage=1 00:40:39.089 --rc genhtml_function_coverage=1 00:40:39.089 --rc genhtml_legend=1 00:40:39.089 --rc geninfo_all_blocks=1 00:40:39.089 --rc geninfo_unexecuted_blocks=1 00:40:39.089 00:40:39.089 ' 00:40:39.089 06:51:10 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:39.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.089 --rc genhtml_branch_coverage=1 00:40:39.089 --rc genhtml_function_coverage=1 00:40:39.090 --rc genhtml_legend=1 00:40:39.090 --rc geninfo_all_blocks=1 00:40:39.090 --rc geninfo_unexecuted_blocks=1 00:40:39.090 00:40:39.090 ' 00:40:39.090 06:51:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:39.090 06:51:10 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:39.090 06:51:10 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:39.090 06:51:10 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:39.090 06:51:10 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:39.090 06:51:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.090 06:51:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.090 06:51:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.090 06:51:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:39.090 06:51:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
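The sourced nvmf/common.sh above derives a reusable host identity before any connect is attempted. A minimal sketch of that step, assuming nvme-cli is installed; extracting the host ID from the NQN's uuid tail is an inference from the NVME_HOSTID value visible in the trace:

  NVME_HOSTNQN=$(nvme gen-hostnqn)                  # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}               # assumed: host ID is the UUID tail of the NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  echo "${NVME_HOST[@]}"                            # later expanded into 'nvme connect' calls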
00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:39.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:39.090 06:51:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:39.090 06:51:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:39.090 06:51:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:39.090 06:51:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:39.090 06:51:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:39.090 06:51:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:39.090 /tmp/:spdk-test:key0 00:40:39.090 06:51:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:39.090 06:51:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:39.090 
06:51:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:39.090 06:51:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:39.349 06:51:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:39.349 06:51:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:39.349 /tmp/:spdk-test:key1 00:40:39.349 06:51:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=832923 00:40:39.349 06:51:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 832923 00:40:39.349 06:51:10 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:39.349 06:51:10 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 832923 ']' 00:40:39.349 06:51:10 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:39.349 06:51:10 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:39.349 06:51:10 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:39.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:39.349 06:51:10 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:39.349 06:51:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:39.349 [2024-11-20 06:51:10.989867] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
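The prep_key trace above wraps each raw key in the TP8018 configured-PSK format and writes it to a 0600-mode file. A minimal sketch of what the inline 'python -' step appears to compute; the little-endian CRC32 tail is inferred from SPDK's key format and should be treated as an assumption:

  key=00112233445566778899aabbccddeeff              # from keyring/linux.sh@13
  psk=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key")
  echo "$psk" > /tmp/:spdk-test:key0                # matches the NVMeTLSkey-1:00:... value in the trace
  chmod 0600 /tmp/:spdk-test:key0                   # keys must not be world-readable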
00:40:39.349 [2024-11-20 06:51:10.989916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832923 ] 00:40:39.349 [2024-11-20 06:51:11.065011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.349 [2024-11-20 06:51:11.106923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:40:39.607 06:51:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:39.607 [2024-11-20 06:51:11.325858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:39.607 null0 00:40:39.607 [2024-11-20 06:51:11.357920] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:39.607 [2024-11-20 06:51:11.358275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:39.607 06:51:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:39.607 107355749 00:40:39.607 06:51:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:39.607 574476183 00:40:39.607 06:51:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=832928 00:40:39.607 06:51:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 832928 /var/tmp/bperf.sock 00:40:39.607 06:51:11 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 832928 ']' 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:39.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:39.607 06:51:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:39.607 [2024-11-20 06:51:11.429058] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:40:39.607 [2024-11-20 06:51:11.429098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832928 ] 00:40:39.865 [2024-11-20 06:51:11.502910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.865 [2024-11-20 06:51:11.542990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:39.865 06:51:11 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:39.865 06:51:11 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:40:39.865 06:51:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:39.865 06:51:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:40.122 06:51:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:40.122 06:51:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:40.379 06:51:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:40.379 06:51:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:40.379 [2024-11-20 06:51:12.198496] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:40.636 nvme0n1 00:40:40.636 06:51:12 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:40.636 06:51:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:40.636 06:51:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:40.636 06:51:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:40.636 06:51:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:40.636 06:51:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:40.894 06:51:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:40.894 06:51:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:40.894 06:51:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@25 -- # sn=107355749 00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:40.894 06:51:12 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 107355749 == \1\0\7\3\5\5\7\4\9 ]]
00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 107355749
00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:40:40.894 06:51:12 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:40:41.151 Running I/O for 1 seconds...
00:40:42.083 21847.00 IOPS, 85.34 MiB/s
00:40:42.083                                                                                                  Latency(us)
[2024-11-20T05:51:13.919Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:40:42.083 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:40:42.084 nvme0n1                     :       1.01   21847.54      85.34       0.00       0.00     5839.94    1973.88    7084.13
00:40:42.084 [2024-11-20T05:51:13.920Z] ===================================================================================================================
00:40:42.084 [2024-11-20T05:51:13.920Z] Total                       :              21847.54      85.34       0.00       0.00     5839.94    1973.88    7084.13
00:40:42.084 {
00:40:42.084   "results": [
00:40:42.084     {
00:40:42.084       "job": "nvme0n1",
00:40:42.084       "core_mask": "0x2",
00:40:42.084       "workload": "randread",
00:40:42.084       "status": "finished",
00:40:42.084       "queue_depth": 128,
00:40:42.084       "io_size": 4096,
00:40:42.084       "runtime": 1.005834,
00:40:42.084       "iops": 21847.541443220252,
00:40:42.084       "mibps": 85.34195876257911,
00:40:42.084       "io_failed": 0,
00:40:42.084       "io_timeout": 0,
00:40:42.084       "avg_latency_us": 5839.94477117937,
00:40:42.084       "min_latency_us": 1973.8819047619047,
00:40:42.084       "max_latency_us": 7084.129523809524
00:40:42.084     }
00:40:42.084   ],
00:40:42.084   "core_count": 1
00:40:42.084 }
00:40:42.084 06:51:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:40:42.084 06:51:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:40:42.342 06:51:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:40:42.342 06:51:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:40:42.342 06:51:13 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:40:42.342 06:51:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:40:42.342 06:51:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:42.342 06:51:13 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@23 -- # return
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:40:42.600 06:51:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:40:42.600 [2024-11-20 06:51:14.346537] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:40:42.600 [2024-11-20 06:51:14.346920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e20f60 (107): Transport endpoint is not connected
00:40:42.600 [2024-11-20 06:51:14.347914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e20f60 (9): Bad file descriptor
00:40:42.600 [2024-11-20 06:51:14.348917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:40:42.600 [2024-11-20 06:51:14.348926] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:40:42.600 [2024-11-20 06:51:14.348933] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:40:42.600 [2024-11-20 06:51:14.348942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
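The NOT wrapper traced above only inverts an exit status: attaching with ':spdk-test:key1', a key the target was never configured with, has to fail for the test to pass. A stand-alone sketch of the same negative check, built from the rpc.py invocation already shown in the trace:

  if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
      echo 'unexpected success: attach with an unknown PSK must fail' >&2
      exit 1
  fi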
00:40:42.600 request:
00:40:42.600 {
00:40:42.600   "name": "nvme0",
00:40:42.600   "trtype": "tcp",
00:40:42.600   "traddr": "127.0.0.1",
00:40:42.600   "adrfam": "ipv4",
00:40:42.600   "trsvcid": "4420",
00:40:42.600   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:42.600   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:42.600   "prchk_reftag": false,
00:40:42.600   "prchk_guard": false,
00:40:42.600   "hdgst": false,
00:40:42.600   "ddgst": false,
00:40:42.600   "psk": ":spdk-test:key1",
00:40:42.600   "allow_unrecognized_csi": false,
00:40:42.600   "method": "bdev_nvme_attach_controller",
00:40:42.600   "req_id": 1
00:40:42.600 }
00:40:42.600 Got JSON-RPC error response
00:40:42.600 response:
00:40:42.600 {
00:40:42.600   "code": -5,
00:40:42.600   "message": "Input/output error"
00:40:42.600 }
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@33 -- # sn=107355749
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 107355749
00:40:42.600 1 links removed
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@33 -- # sn=574476183
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 574476183
00:40:42.600 1 links removed
00:40:42.600 06:51:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 832928
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 832928 ']'
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 832928
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:40:42.600 06:51:14 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 832928
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 832928'
killing process with pid 832928
06:51:14 keyring_linux -- common/autotest_common.sh@971 -- # kill 832928
Received shutdown signal, test time was about 1.000000 seconds
00:40:42.858
00:40:42.858                                                                                                  Latency(us)
00:40:42.858 [2024-11-20T05:51:14.694Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:40:42.858 [2024-11-20T05:51:14.694Z] ===================================================================================================================
00:40:42.858 [2024-11-20T05:51:14.694Z] Total                       :                  0.00       0.00       0.00       0.00        0.00       0.00       0.00
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@976 -- # wait 832928
00:40:42.858 06:51:14 keyring_linux -- keyring/linux.sh@42 -- # killprocess 832923
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 832923 ']'
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 832923
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 832923
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:40:42.858 06:51:14 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 832923'
killing process with pid 832923
06:51:14 keyring_linux -- common/autotest_common.sh@971 -- # kill 832923
06:51:14 keyring_linux -- common/autotest_common.sh@976 -- # wait 832923
00:40:43.116
00:40:43.116 real 0m4.287s
00:40:43.116 user 0m8.089s
00:40:43.116 sys 0m1.387s
00:40:43.116 06:51:14 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable
00:40:43.116 06:51:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:40:43.116 ************************************
00:40:43.116 END TEST keyring_linux
00:40:43.116 ************************************
00:40:43.374 06:51:14 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:40:43.374 06:51:14 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:40:43.374 06:51:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:40:43.374 06:51:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:40:43.374 06:51:14 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:40:43.374 06:51:14 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:40:43.374 06:51:14 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:40:43.374 06:51:14 -- common/autotest_common.sh@724 -- # xtrace_disable
00:40:43.374 06:51:14 -- common/autotest_common.sh@10 -- # set +x
00:40:43.374 06:51:14 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:40:43.374 06:51:14 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:40:43.374 06:51:14 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:40:43.374 06:51:14 -- common/autotest_common.sh@10 -- # set +x
00:40:48.647 INFO: APP EXITING
00:40:48.647 INFO:
killing all VMs 00:40:48.647 INFO: killing vhost app 00:40:48.647 INFO: EXIT DONE 00:40:51.184 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:40:51.184 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:40:51.184 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:40:54.484 Cleaning 00:40:54.484 Removing: /var/run/dpdk/spdk0/config 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:54.484 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:54.484 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:54.484 Removing: /var/run/dpdk/spdk1/config 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:54.484 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:54.484 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:54.484 Removing: /var/run/dpdk/spdk2/config 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:54.484 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:54.484 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:54.484 Removing: /var/run/dpdk/spdk3/config 00:40:54.484 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:54.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:54.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:54.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:54.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:54.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:54.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:54.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:54.484 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:54.484 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:54.484 Removing: /var/run/dpdk/spdk4/config 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:54.484 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:54.484 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:54.484 Removing: /dev/shm/bdev_svc_trace.1 00:40:54.484 Removing: /dev/shm/nvmf_trace.0 00:40:54.484 Removing: /dev/shm/spdk_tgt_trace.pid352713 00:40:54.484 Removing: /var/run/dpdk/spdk0 00:40:54.484 Removing: /var/run/dpdk/spdk1 00:40:54.484 Removing: /var/run/dpdk/spdk2 00:40:54.484 Removing: /var/run/dpdk/spdk3 00:40:54.484 Removing: /var/run/dpdk/spdk4 00:40:54.484 Removing: /var/run/dpdk/spdk_pid350326 00:40:54.484 Removing: /var/run/dpdk/spdk_pid351398 00:40:54.484 Removing: /var/run/dpdk/spdk_pid352713 00:40:54.484 Removing: /var/run/dpdk/spdk_pid353291 00:40:54.484 Removing: /var/run/dpdk/spdk_pid354192 00:40:54.484 Removing: /var/run/dpdk/spdk_pid354311 00:40:54.484 Removing: /var/run/dpdk/spdk_pid355282 00:40:54.484 Removing: /var/run/dpdk/spdk_pid355418 00:40:54.484 Removing: /var/run/dpdk/spdk_pid355649 00:40:54.484 Removing: /var/run/dpdk/spdk_pid357389 00:40:54.484 Removing: /var/run/dpdk/spdk_pid358886 00:40:54.484 Removing: /var/run/dpdk/spdk_pid359176 00:40:54.484 Removing: /var/run/dpdk/spdk_pid359466 00:40:54.484 Removing: /var/run/dpdk/spdk_pid359782 00:40:54.484 Removing: /var/run/dpdk/spdk_pid360068 00:40:54.484 Removing: /var/run/dpdk/spdk_pid360324 00:40:54.484 Removing: /var/run/dpdk/spdk_pid360574 00:40:54.484 Removing: /var/run/dpdk/spdk_pid360862 00:40:54.484 Removing: /var/run/dpdk/spdk_pid361425 00:40:54.484 Removing: /var/run/dpdk/spdk_pid364532 00:40:54.484 Removing: /var/run/dpdk/spdk_pid364864 00:40:54.484 Removing: /var/run/dpdk/spdk_pid365118 00:40:54.484 Removing: /var/run/dpdk/spdk_pid365130 00:40:54.484 Removing: /var/run/dpdk/spdk_pid365518 00:40:54.484 Removing: /var/run/dpdk/spdk_pid365623 00:40:54.484 Removing: /var/run/dpdk/spdk_pid365906 00:40:54.484 Removing: /var/run/dpdk/spdk_pid366126 00:40:54.484 Removing: /var/run/dpdk/spdk_pid366384 00:40:54.484 Removing: /var/run/dpdk/spdk_pid366395 00:40:54.484 Removing: /var/run/dpdk/spdk_pid366653 00:40:54.484 Removing: /var/run/dpdk/spdk_pid366664 00:40:54.484 Removing: /var/run/dpdk/spdk_pid367229 00:40:54.484 Removing: /var/run/dpdk/spdk_pid367477 00:40:54.484 Removing: /var/run/dpdk/spdk_pid367773 00:40:54.484 Removing: /var/run/dpdk/spdk_pid371488 00:40:54.484 
Removing: /var/run/dpdk/spdk_pid375986 00:40:54.484 Removing: /var/run/dpdk/spdk_pid386747 00:40:54.484 Removing: /var/run/dpdk/spdk_pid387232 00:40:54.484 Removing: /var/run/dpdk/spdk_pid391729 00:40:54.484 Removing: /var/run/dpdk/spdk_pid391982 00:40:54.484 Removing: /var/run/dpdk/spdk_pid396245 00:40:54.484 Removing: /var/run/dpdk/spdk_pid402141 00:40:54.484 Removing: /var/run/dpdk/spdk_pid404747 00:40:54.484 Removing: /var/run/dpdk/spdk_pid415180 00:40:54.484 Removing: /var/run/dpdk/spdk_pid424118 00:40:54.484 Removing: /var/run/dpdk/spdk_pid426403 00:40:54.484 Removing: /var/run/dpdk/spdk_pid427369 00:40:54.484 Removing: /var/run/dpdk/spdk_pid444038 00:40:54.484 Removing: /var/run/dpdk/spdk_pid448116 00:40:54.484 Removing: /var/run/dpdk/spdk_pid494343 00:40:54.484 Removing: /var/run/dpdk/spdk_pid499527 00:40:54.484 Removing: /var/run/dpdk/spdk_pid505369 00:40:54.484 Removing: /var/run/dpdk/spdk_pid512027 00:40:54.484 Removing: /var/run/dpdk/spdk_pid512032 00:40:54.484 Removing: /var/run/dpdk/spdk_pid512951 00:40:54.484 Removing: /var/run/dpdk/spdk_pid513863 00:40:54.484 Removing: /var/run/dpdk/spdk_pid514760 00:40:54.484 Removing: /var/run/dpdk/spdk_pid515249 00:40:54.484 Removing: /var/run/dpdk/spdk_pid515254 00:40:54.484 Removing: /var/run/dpdk/spdk_pid515488 00:40:54.484 Removing: /var/run/dpdk/spdk_pid515709 00:40:54.484 Removing: /var/run/dpdk/spdk_pid515711 00:40:54.484 Removing: /var/run/dpdk/spdk_pid516628 00:40:54.484 Removing: /var/run/dpdk/spdk_pid517436 00:40:54.484 Removing: /var/run/dpdk/spdk_pid518247 00:40:54.484 Removing: /var/run/dpdk/spdk_pid519004 00:40:54.484 Removing: /var/run/dpdk/spdk_pid519034 00:40:54.484 Removing: /var/run/dpdk/spdk_pid519338 00:40:54.485 Removing: /var/run/dpdk/spdk_pid520911 00:40:54.485 Removing: /var/run/dpdk/spdk_pid521896 00:40:54.485 Removing: /var/run/dpdk/spdk_pid530177 00:40:54.485 Removing: /var/run/dpdk/spdk_pid559127 00:40:54.485 Removing: /var/run/dpdk/spdk_pid563657 00:40:54.485 Removing: /var/run/dpdk/spdk_pid565257 00:40:54.485 Removing: /var/run/dpdk/spdk_pid567094 00:40:54.485 Removing: /var/run/dpdk/spdk_pid567112 00:40:54.485 Removing: /var/run/dpdk/spdk_pid567346 00:40:54.485 Removing: /var/run/dpdk/spdk_pid567520 00:40:54.485 Removing: /var/run/dpdk/spdk_pid567940 00:40:54.485 Removing: /var/run/dpdk/spdk_pid569702 00:40:54.485 Removing: /var/run/dpdk/spdk_pid570560 00:40:54.744 Removing: /var/run/dpdk/spdk_pid570969 00:40:54.744 Removing: /var/run/dpdk/spdk_pid573287 00:40:54.744 Removing: /var/run/dpdk/spdk_pid573782 00:40:54.744 Removing: /var/run/dpdk/spdk_pid574284 00:40:54.744 Removing: /var/run/dpdk/spdk_pid578552 00:40:54.744 Removing: /var/run/dpdk/spdk_pid583977 00:40:54.744 Removing: /var/run/dpdk/spdk_pid583978 00:40:54.744 Removing: /var/run/dpdk/spdk_pid583980 00:40:54.744 Removing: /var/run/dpdk/spdk_pid587830 00:40:54.744 Removing: /var/run/dpdk/spdk_pid596308 00:40:54.744 Removing: /var/run/dpdk/spdk_pid600857 00:40:54.744 Removing: /var/run/dpdk/spdk_pid606985 00:40:54.744 Removing: /var/run/dpdk/spdk_pid608164 00:40:54.744 Removing: /var/run/dpdk/spdk_pid609699 00:40:54.744 Removing: /var/run/dpdk/spdk_pid611029 00:40:54.744 Removing: /var/run/dpdk/spdk_pid615739 00:40:54.744 Removing: /var/run/dpdk/spdk_pid620079 00:40:54.744 Removing: /var/run/dpdk/spdk_pid624098 00:40:54.744 Removing: /var/run/dpdk/spdk_pid631693 00:40:54.744 Removing: /var/run/dpdk/spdk_pid631695 00:40:54.744 Removing: /var/run/dpdk/spdk_pid636423 00:40:54.744 Removing: /var/run/dpdk/spdk_pid636647 00:40:54.744 Removing: 
/var/run/dpdk/spdk_pid636876 00:40:54.744 Removing: /var/run/dpdk/spdk_pid637336 00:40:54.744 Removing: /var/run/dpdk/spdk_pid637341 00:40:54.744 Removing: /var/run/dpdk/spdk_pid641843 00:40:54.744 Removing: /var/run/dpdk/spdk_pid642414 00:40:54.744 Removing: /var/run/dpdk/spdk_pid646908 00:40:54.744 Removing: /var/run/dpdk/spdk_pid650027 00:40:54.744 Removing: /var/run/dpdk/spdk_pid655420 00:40:54.745 Removing: /var/run/dpdk/spdk_pid660749 00:40:54.745 Removing: /var/run/dpdk/spdk_pid669491 00:40:54.745 Removing: /var/run/dpdk/spdk_pid676444 00:40:54.745 Removing: /var/run/dpdk/spdk_pid676508 00:40:54.745 Removing: /var/run/dpdk/spdk_pid695153 00:40:54.745 Removing: /var/run/dpdk/spdk_pid695700 00:40:54.745 Removing: /var/run/dpdk/spdk_pid696681 00:40:54.745 Removing: /var/run/dpdk/spdk_pid697245 00:40:54.745 Removing: /var/run/dpdk/spdk_pid697958 00:40:54.745 Removing: /var/run/dpdk/spdk_pid698464 00:40:54.745 Removing: /var/run/dpdk/spdk_pid699163 00:40:54.745 Removing: /var/run/dpdk/spdk_pid699641 00:40:54.745 Removing: /var/run/dpdk/spdk_pid703869 00:40:54.745 Removing: /var/run/dpdk/spdk_pid704118 00:40:54.745 Removing: /var/run/dpdk/spdk_pid710066 00:40:54.745 Removing: /var/run/dpdk/spdk_pid710240 00:40:54.745 Removing: /var/run/dpdk/spdk_pid715560 00:40:54.745 Removing: /var/run/dpdk/spdk_pid719748 00:40:54.745 Removing: /var/run/dpdk/spdk_pid729479 00:40:54.745 Removing: /var/run/dpdk/spdk_pid730158 00:40:54.745 Removing: /var/run/dpdk/spdk_pid734199 00:40:54.745 Removing: /var/run/dpdk/spdk_pid734587 00:40:54.745 Removing: /var/run/dpdk/spdk_pid738707 00:40:54.745 Removing: /var/run/dpdk/spdk_pid744947 00:40:54.745 Removing: /var/run/dpdk/spdk_pid747495 00:40:54.745 Removing: /var/run/dpdk/spdk_pid757460 00:40:54.745 Removing: /var/run/dpdk/spdk_pid766257 00:40:54.745 Removing: /var/run/dpdk/spdk_pid767860 00:40:54.745 Removing: /var/run/dpdk/spdk_pid768786 00:40:54.745 Removing: /var/run/dpdk/spdk_pid784920 00:40:54.745 Removing: /var/run/dpdk/spdk_pid788730 00:40:54.745 Removing: /var/run/dpdk/spdk_pid791970 00:40:54.745 Removing: /var/run/dpdk/spdk_pid799921 00:40:54.745 Removing: /var/run/dpdk/spdk_pid799931 00:40:54.745 Removing: /var/run/dpdk/spdk_pid805174 00:40:54.745 Removing: /var/run/dpdk/spdk_pid807090 00:40:55.004 Removing: /var/run/dpdk/spdk_pid808952 00:40:55.004 Removing: /var/run/dpdk/spdk_pid810155 00:40:55.004 Removing: /var/run/dpdk/spdk_pid812125 00:40:55.004 Removing: /var/run/dpdk/spdk_pid813199 00:40:55.004 Removing: /var/run/dpdk/spdk_pid822162 00:40:55.004 Removing: /var/run/dpdk/spdk_pid822622 00:40:55.004 Removing: /var/run/dpdk/spdk_pid823181 00:40:55.004 Removing: /var/run/dpdk/spdk_pid825565 00:40:55.004 Removing: /var/run/dpdk/spdk_pid826030 00:40:55.004 Removing: /var/run/dpdk/spdk_pid826498 00:40:55.004 Removing: /var/run/dpdk/spdk_pid830325 00:40:55.004 Removing: /var/run/dpdk/spdk_pid830339 00:40:55.004 Removing: /var/run/dpdk/spdk_pid832365 00:40:55.004 Removing: /var/run/dpdk/spdk_pid832923 00:40:55.004 Removing: /var/run/dpdk/spdk_pid832928 00:40:55.004 Clean 00:40:55.004 06:51:26 -- common/autotest_common.sh@1451 -- # return 0 00:40:55.004 06:51:26 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:40:55.004 06:51:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:55.004 06:51:26 -- common/autotest_common.sh@10 -- # set +x 00:40:55.004 06:51:26 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:40:55.004 06:51:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:55.004 06:51:26 -- common/autotest_common.sh@10 -- 
# set +x 00:40:55.004 06:51:26 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:55.004 06:51:26 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:55.004 06:51:26 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:55.004 06:51:26 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:40:55.004 06:51:26 -- spdk/autotest.sh@394 -- # hostname 00:40:55.004 06:51:26 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:55.263 geninfo: WARNING: invalid characters removed from testname! 00:41:17.199 06:51:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:18.598 06:51:50 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:20.503 06:51:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:22.409 06:51:54 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:24.311 06:51:55 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:26.216 06:51:57 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:28.119 06:51:59 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:28.119 06:51:59 -- spdk/autorun.sh@1 -- $ timing_finish 00:41:28.119 06:51:59 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:41:28.119 06:51:59 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:28.119 06:51:59 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:41:28.119 06:51:59 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:28.119 + [[ -n 273063 ]] 00:41:28.119 + sudo kill 273063 00:41:28.127 [Pipeline] } 00:41:28.141 [Pipeline] // stage 00:41:28.146 [Pipeline] } 00:41:28.159 [Pipeline] // timeout 00:41:28.163 [Pipeline] } 00:41:28.177 [Pipeline] // catchError 00:41:28.181 [Pipeline] } 00:41:28.194 [Pipeline] // wrap 00:41:28.200 [Pipeline] } 00:41:28.212 [Pipeline] // catchError 00:41:28.220 [Pipeline] stage 00:41:28.222 [Pipeline] { (Epilogue) 00:41:28.234 [Pipeline] catchError 00:41:28.236 [Pipeline] { 00:41:28.247 [Pipeline] echo 00:41:28.248 Cleanup processes 00:41:28.254 [Pipeline] sh 00:41:28.538 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:28.538 843621 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:28.550 [Pipeline] sh 00:41:28.833 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:28.833 ++ grep -v 'sudo pgrep' 00:41:28.833 ++ awk '{print $1}' 00:41:28.833 + sudo kill -9 00:41:28.833 + true 00:41:28.844 [Pipeline] sh 00:41:29.128 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:41.432 [Pipeline] sh 00:41:41.715 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:41.715 Artifacts sizes are good 00:41:41.730 [Pipeline] archiveArtifacts 00:41:41.737 Archiving artifacts 00:41:41.860 [Pipeline] sh 00:41:42.145 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:42.160 [Pipeline] cleanWs 00:41:42.171 [WS-CLEANUP] Deleting project workspace... 00:41:42.171 [WS-CLEANUP] Deferred wipeout is used... 00:41:42.178 [WS-CLEANUP] done 00:41:42.180 [Pipeline] } 00:41:42.201 [Pipeline] // catchError 00:41:42.213 [Pipeline] sh 00:41:42.498 + logger -p user.info -t JENKINS-CI 00:41:42.507 [Pipeline] } 00:41:42.522 [Pipeline] // stage 00:41:42.526 [Pipeline] } 00:41:42.542 [Pipeline] // node 00:41:42.548 [Pipeline] End of Pipeline 00:41:42.579 Finished: SUCCESS
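For reference, the epilogue's orphan-process sweep reduces to the pgrep/kill pair below; a minimal sketch assuming the same workspace path, where the grep -v guard stops the sweep from matching its own pgrep and the trailing '|| true' keeps the stage green when nothing is left to kill:

  pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true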